Columns: Model Name (string, 5–122 chars); URL (string, 28–145 chars); Crawled Text (string, 1–199k chars).
Awsaf/large-eren
https://huggingface.co/Awsaf/large-eren
null
Axcel/DialoGPT-small-rick
https://huggingface.co/Axcel/DialoGPT-small-rick
null
Axon/resnet18-v1
https://huggingface.co/Axon/resnet18-v1
This ResNet18 model was translated from the ONNX ResNetv1 model found at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using AxonOnnx. The following description is copied from the relevant description at the ONNX repository. These ResNet models perform image classification: they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. ResNet models provide very high accuracies with affordable model sizes and are ideal for cases where high classification accuracy is required. ImageNet-trained models are often used as the base layers for a transfer-learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches. Deeper neural networks are more difficult to train. A residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset the residual nets were evaluated with a depth of up to 152 layers, 8× deeper than VGG nets but still with lower complexity. ResNet models consist of residual blocks and were introduced to counter the accuracy degradation observed in deeper networks whose initial layers are not learned well. ResNet v1 uses post-activation for the residual blocks. All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size and H and W are expected to be at least 224. Inference was done using JPEG images. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing. The model outputs image scores for each of the 1000 classes of ImageNet. Post-processing involves calculating the softmax probability scores for each class; you can also sort them to report the most probable classes. Check imagenet_postprocess.py for code. Dataset used for training and validation: ImageNet (ILSVRC2012). Check imagenet_prep for guidelines on preparing the dataset. ResNetv1: Deep Residual Learning for Image Recognition, He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. ONNX source model: onnx/models vision/classification/resnet resnet18-v1-7.onnx
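The preprocessing and post-processing described above can be sketched in Python. This is only an illustration of the stated recipe (the model itself is an Axon/Elixir artifact), and the preprocess/postprocess helper names are hypothetical, not part of the repository:

import numpy as np
from PIL import Image

# Per-channel ImageNet mean and std quoted in the description above.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path, size=224):
    # Load a JPEG, scale to [0, 1], normalize per channel, and lay it out as (1, 3, H, W).
    img = Image.open(path).convert("RGB").resize((size, size))
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - MEAN) / STD
    return np.transpose(x, (2, 0, 1))[np.newaxis, ...]

def postprocess(logits):
    # Softmax over the 1000 ImageNet classes, then class indices sorted by probability.
    e = np.exp(logits - np.max(logits))
    probs = e / e.sum()
    return np.argsort(probs)[::-1]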
Axon/resnet34-v1
https://huggingface.co/Axon/resnet34-v1
This ResNet34 model was translated from the ONNX ResNetv1 model found at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using AxonOnnx. The following description is copied from the relevant description at the ONNX repository. These ResNet models perform image classification: they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. ResNet models provide very high accuracies with affordable model sizes and are ideal for cases where high classification accuracy is required. ImageNet-trained models are often used as the base layers for a transfer-learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches. Deeper neural networks are more difficult to train. A residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset the residual nets were evaluated with a depth of up to 152 layers, 8× deeper than VGG nets but still with lower complexity. ResNet models consist of residual blocks and were introduced to counter the accuracy degradation observed in deeper networks whose initial layers are not learned well. ResNet v1 uses post-activation for the residual blocks. All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size and H and W are expected to be at least 224. Inference was done using JPEG images. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing. The model outputs image scores for each of the 1000 classes of ImageNet. Post-processing involves calculating the softmax probability scores for each class; you can also sort them to report the most probable classes. Check imagenet_postprocess.py for code. Dataset used for training and validation: ImageNet (ILSVRC2012). Check imagenet_prep for guidelines on preparing the dataset. ResNetv1: Deep Residual Learning for Image Recognition, He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. ONNX source model: onnx/models vision/classification/resnet resnet34-v1-7.onnx
Axon/resnet50-v1
https://huggingface.co/Axon/resnet50-v1
This ResNet50 model was translated from the ONNX ResNetv1 model found at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using AxonOnnx. The following description is copied from the relevant description at the ONNX repository. These ResNet models perform image classification: they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. ResNet models provide very high accuracies with affordable model sizes and are ideal for cases where high classification accuracy is required. ImageNet-trained models are often used as the base layers for a transfer-learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches. Deeper neural networks are more difficult to train. A residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset the residual nets were evaluated with a depth of up to 152 layers, 8× deeper than VGG nets but still with lower complexity. ResNet models consist of residual blocks and were introduced to counter the accuracy degradation observed in deeper networks whose initial layers are not learned well. ResNet v1 uses post-activation for the residual blocks. All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size and H and W are expected to be at least 224. Inference was done using JPEG images. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing. The model outputs image scores for each of the 1000 classes of ImageNet. Post-processing involves calculating the softmax probability scores for each class; you can also sort them to report the most probable classes. Check imagenet_postprocess.py for code. Dataset used for training and validation: ImageNet (ILSVRC2012). Check imagenet_prep for guidelines on preparing the dataset. ResNetv1: Deep Residual Learning for Image Recognition, He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. ONNX source model: onnx/models vision/classification/resnet resnet50-v1-7.onnx
Ayah/GPT2-DBpedia
https://huggingface.co/Ayah/GPT2-DBpedia
No model card.
Ayato/DialoGTP-large-Yuri
https://huggingface.co/Ayato/DialoGTP-large-Yuri
No model card.
Aybars/ModelOnTquad
https://huggingface.co/Aybars/ModelOnTquad
No model card.
Aybars/ModelOnWhole
https://huggingface.co/Aybars/ModelOnWhole
No model card.
Aybars/XLM_Turkish
https://huggingface.co/Aybars/XLM_Turkish
No model card.
Ayham/albert_bert_summarization_cnn_dailymail
https://huggingface.co/Ayham/albert_bert_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
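These Ayham checkpoints appear to be transformers encoder-decoder summarization models. A minimal usage sketch, assuming the repository ships both weights and a compatible tokenizer (the card does not confirm this), would load the checkpoint through the generic summarization pipeline:

from transformers import pipeline

# Hedged sketch: the model id is taken from the row above; pipeline availability is an assumption.
summarizer = pipeline("summarization", model="Ayham/albert_bert_summarization_cnn_dailymail")
article = "Some long news article text ..."
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])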
Ayham/albert_distilgpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/albert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/albert_gpt2_Full_summarization_cnndm
https://huggingface.co/Ayham/albert_gpt2_Full_summarization_cnndm
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/albert_gpt2_summarization_cnndm
https://huggingface.co/Ayham/albert_gpt2_summarization_cnndm
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/albert_gpt2_summarization_xsum
https://huggingface.co/Ayham/albert_gpt2_summarization_xsum
This model is a fine-tuned version of on the xsum dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/albert_roberta_summarization_cnn_dailymail
https://huggingface.co/Ayham/albert_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bert_bert_summarization_cnn_dailymail
https://huggingface.co/Ayham/bert_bert_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bert_distilgpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/bert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bert_gpt2_summarization_cnndm
https://huggingface.co/Ayham/bert_gpt2_summarization_cnndm
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bert_gpt2_summarization_cnndm_new
https://huggingface.co/Ayham/bert_gpt2_summarization_cnndm_new
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bert_gpt2_summarization_xsum
https://huggingface.co/Ayham/bert_gpt2_summarization_xsum
This model is a fine-tuned version of on the xsum dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bert_roberta_summarization_cnn_dailymail
https://huggingface.co/Ayham/bert_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/bertgpt2_cnn
https://huggingface.co/Ayham/bertgpt2_cnn
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/distilbert_bert_summarization_cnn_dailymail
https://huggingface.co/Ayham/distilbert_bert_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/distilbert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/distilbert_gpt2_summarization_cnndm
https://huggingface.co/Ayham/distilbert_gpt2_summarization_cnndm
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/distilbert_gpt2_summarization_xsum
https://huggingface.co/Ayham/distilbert_gpt2_summarization_xsum
This model is a fine-tuned version of on the xsum dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/distilbert_roberta_summarization_cnn_dailymail
https://huggingface.co/Ayham/distilbert_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/ernie_gpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/ernie_gpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/roberta_bert_summarization_cnn_dailymail
https://huggingface.co/Ayham/roberta_bert_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/roberta_distilgpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/roberta_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/roberta_gpt2_new_max64_summarization_cnndm
https://huggingface.co/Ayham/roberta_gpt2_new_max64_summarization_cnndm
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/roberta_gpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/roberta_gpt2_summarization_cnn_dailymail
This model is an encoder-decoder model fine-tuned on the cnn_dailymail dataset. It uses a RoBERTa encoder and a GPT-2 decoder and was fine-tuned on the summarization task. It achieved the following ROUGE scores: ROUGE-1 = 35.886, ROUGE-2 = 16.292, ROUGE-L = 23.499. To use its API:
from transformers import RobertaTokenizerFast, GPT2Tokenizer, EncoderDecoderModel

# Load the fine-tuned encoder-decoder checkpoint plus the tokenizers for its encoder (RoBERTa) and decoder (GPT-2).
model = EncoderDecoderModel.from_pretrained("Ayham/roberta_gpt2_summarization_cnn_dailymail")
input_tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
output_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

article = """Your Input Text"""

# Tokenize the article, generate a summary, and decode it back to text.
input_ids = input_tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(output_tokenizer.decode(output_ids[0], skip_special_tokens=True))
More information needed. More information needed. The following hyperparameters were used during training:
Ayham/roberta_gpt2_summarization_xsum
https://huggingface.co/Ayham/roberta_gpt2_summarization_xsum
This model is a fine-tuned version of on the xsum dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/roberta_roberta_summarization_cnn_dailymail
https://huggingface.co/Ayham/roberta_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/robertagpt2_cnn
https://huggingface.co/Ayham/robertagpt2_cnn
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/robertagpt2_xsum
https://huggingface.co/Ayham/robertagpt2_xsum
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/robertagpt2_xsum2
https://huggingface.co/Ayham/robertagpt2_xsum2
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/robertagpt2_xsum4
https://huggingface.co/Ayham/robertagpt2_xsum4
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlmroberta_gpt2_summarization_xsum
https://huggingface.co/Ayham/xlmroberta_gpt2_summarization_xsum
This model is a fine-tuned version of on the xsum dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlmroberta_large_gpt2_summarization_cnndm
https://huggingface.co/Ayham/xlmroberta_large_gpt2_summarization_cnndm
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnet_bert_summarization_cnn_dailymail
https://huggingface.co/Ayham/xlnet_bert_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnet_gpt2_summarization_cnn_dailymail
https://huggingface.co/Ayham/xlnet_gpt2_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnet_gpt2_summarization_xsum
https://huggingface.co/Ayham/xlnet_gpt2_summarization_xsum
This model is a fine-tuned version of on the xsum dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnet_gpt_xsum
https://huggingface.co/Ayham/xlnet_gpt_xsum
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnet_roberta_new_summarization_cnn_dailymail
https://huggingface.co/Ayham/xlnet_roberta_new_summarization_cnn_dailymail
No model card.
Ayham/xlnet_roberta_summarization_cnn_dailymail
https://huggingface.co/Ayham/xlnet_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of on the cnn_dailymail dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayham/xlnetgpt2_xsum7
https://huggingface.co/Ayham/xlnetgpt2_xsum7
This model is a fine-tuned version of on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Ayjayo/DialoGPT-medium-AyjayoAI
https://huggingface.co/Ayjayo/DialoGPT-medium-AyjayoAI
#Ayjayo
Aymene/opus-mt-en-ro-finetuned-en-to-ro
https://huggingface.co/Aymene/opus-mt-en-ro-finetuned-en-to-ro
No model card.
Ayoola/cdial-yoruba-test
https://huggingface.co/Ayoola/cdial-yoruba-test
No model card.
Ayoola/pytorch_model
https://huggingface.co/Ayoola/pytorch_model
No model card.
Ayoola/wav2vec2-large-xlsr-turkish-demo-colab
https://huggingface.co/Ayoola/wav2vec2-large-xlsr-turkish-demo-colab
No model card.
Ayou/chinese_mobile_bert
https://huggingface.co/Ayou/chinese_mobile_bert
MobileBERT was pre-trained on a 250M-scale Chinese corpus, running 1 million steps on a single A100 GPU over 15 days of training.
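A minimal loading sketch for this checkpoint, assuming the repository exposes a standard transformers config, tokenizer, and masked-LM head (none of which the card confirms):

from transformers import pipeline

# Hedged sketch: fill-mask inference with the Chinese MobileBERT checkpoint named above.
fill = pipeline("fill-mask", model="Ayou/chinese_mobile_bert")
print(fill("今天天气很[MASK]。"))  # "The weather today is very [MASK]."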
Ayran/DialoGPT-medium-harry-1
https://huggingface.co/Ayran/DialoGPT-medium-harry-1
No model card.
Ayran/DialoGPT-medium-harry-potter-1-through-3
https://huggingface.co/Ayran/DialoGPT-medium-harry-potter-1-through-3
#DialoGPT medium model (Harry Potter 1-3)
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
https://huggingface.co/Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
#DialoGPT medium model (Based on Harry Potter 1 through 4 plus 6, 18 epochs)
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
https://huggingface.co/Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6
#DialoGPT medium model (Harry Potter 1 through 4 plus 6)
Ayran/DialoGPT-small-gandalf
https://huggingface.co/Ayran/DialoGPT-small-gandalf
null
Ayran/DialoGPT-small-harry-potter-1-through-3
https://huggingface.co/Ayran/DialoGPT-small-harry-potter-1-through-3
null
Ayta/Haha
https://huggingface.co/Ayta/Haha
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ayta/Haha ### Model URL : https://huggingface.co/Ayta/Haha ### Model Description : No model card
Ayu/Shiriro
https://huggingface.co/Ayu/Shiriro
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ayu/Shiriro ### Model URL : https://huggingface.co/Ayu/Shiriro ### Model Description : No model card
Ayumi/Jovana
https://huggingface.co/Ayumi/Jovana
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ayumi/Jovana ### Model URL : https://huggingface.co/Ayumi/Jovana ### Model Description : No model card
AyushPJ/ai-club-inductions-21-nlp-ALBERT
https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-ALBERT
This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/ai-club-inductions-21-nlp-ALBERT ### Model URL : https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-ALBERT ### Model Description : This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad
https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad
This model is the deepset/electra-base-squad2 pre-trained model trained on data from AI Inductions 21 NLP competition (https://www.kaggle.com/c/ai-inductions-21-nlp) for extractive QA. More information needed AI Inductions 21 NLP competition AI Inductions 21 NLP competition data The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad ### Model URL : https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad ### Model Description : This model is the deepset/electra-base-squad2 pre-trained model trained on data from AI Inductions 21 NLP competition (https://www.kaggle.com/c/ai-inductions-21-nlp) for extractive QA. More information needed AI Inductions 21 NLP competition AI Inductions 21 NLP competition data The following hyperparameters were used during training:
AyushPJ/ai-club-inductions-21-nlp-XLNet
https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-XLNet
This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/ai-club-inductions-21-nlp-XLNet ### Model URL : https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-XLNet ### Model Description : This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AyushPJ/ai-club-inductions-21-nlp-distilBERT
https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-distilBERT
This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/ai-club-inductions-21-nlp-distilBERT ### Model URL : https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-distilBERT ### Model Description : This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2
https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2
This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2 ### Model URL : https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2 ### Model Description : This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AyushPJ/ai-club-inductions-21-nlp-roBERTa
https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-roBERTa
This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/ai-club-inductions-21-nlp-roBERTa ### Model URL : https://huggingface.co/AyushPJ/ai-club-inductions-21-nlp-roBERTa ### Model Description : This model was trained from scratch on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
AyushPJ/test-squad-trained-finetuned-squad
https://huggingface.co/AyushPJ/test-squad-trained-finetuned-squad
This model was trained from scratch on the squad dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : AyushPJ/test-squad-trained-finetuned-squad ### Model URL : https://huggingface.co/AyushPJ/test-squad-trained-finetuned-squad ### Model Description : This model was trained from scratch on the squad dataset. More information needed More information needed More information needed The following hyperparameters were used during training:
Azaghast/DistilBART-SCP-ParaSummarization
https://huggingface.co/Azaghast/DistilBART-SCP-ParaSummarization
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azaghast/DistilBART-SCP-ParaSummarization ### Model URL : https://huggingface.co/Azaghast/DistilBART-SCP-ParaSummarization ### Model Description : No model card
Azaghast/DistilBERT-SCP-Class-Classification
https://huggingface.co/Azaghast/DistilBERT-SCP-Class-Classification
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azaghast/DistilBERT-SCP-Class-Classification ### Model URL : https://huggingface.co/Azaghast/DistilBERT-SCP-Class-Classification ### Model Description : No model card
Azaghast/GPT2-SCP-ContainmentProcedures
https://huggingface.co/Azaghast/GPT2-SCP-ContainmentProcedures
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azaghast/GPT2-SCP-ContainmentProcedures ### Model URL : https://huggingface.co/Azaghast/GPT2-SCP-ContainmentProcedures ### Model Description : No model card
Azaghast/GPT2-SCP-Descriptions
https://huggingface.co/Azaghast/GPT2-SCP-Descriptions
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azaghast/GPT2-SCP-Descriptions ### Model URL : https://huggingface.co/Azaghast/GPT2-SCP-Descriptions ### Model Description : No model card
Azaghast/GPT2-SCP-Miscellaneous
https://huggingface.co/Azaghast/GPT2-SCP-Miscellaneous
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azaghast/GPT2-SCP-Miscellaneous ### Model URL : https://huggingface.co/Azaghast/GPT2-SCP-Miscellaneous ### Model Description : No model card
Azizun/Geotrend-10-epochs
https://huggingface.co/Azizun/Geotrend-10-epochs
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azizun/Geotrend-10-epochs ### Model URL : https://huggingface.co/Azizun/Geotrend-10-epochs ### Model Description : No model card
Azura/data
https://huggingface.co/Azura/data
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azura/data ### Model URL : https://huggingface.co/Azura/data ### Model Description : No model card
Azuris/DialoGPT-medium-envy
https://huggingface.co/Azuris/DialoGPT-medium-envy
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azuris/DialoGPT-medium-envy ### Model URL : https://huggingface.co/Azuris/DialoGPT-medium-envy ### Model Description :
Azuris/DialoGPT-medium-senorita
https://huggingface.co/Azuris/DialoGPT-medium-senorita
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azuris/DialoGPT-medium-senorita ### Model URL : https://huggingface.co/Azuris/DialoGPT-medium-senorita ### Model Description :
Azuris/DialoGPT-small-envy
https://huggingface.co/Azuris/DialoGPT-small-envy
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Azuris/DialoGPT-small-envy ### Model URL : https://huggingface.co/Azuris/DialoGPT-small-envy ### Model Description :
BAHIJA/distilbert-base-uncased-finetuned-cola
https://huggingface.co/BAHIJA/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BAHIJA/distilbert-base-uncased-finetuned-cola ### Model URL : https://huggingface.co/BAHIJA/distilbert-base-uncased-finetuned-cola ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
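The card above stops short of a usage snippet, so here is a minimal, hedged sketch of running the checkpoint for CoLA-style acceptability classification with the transformers pipeline; the example sentence is invented, and the returned label names depend on the repository's config (they may simply be LABEL_0/LABEL_1).
```python
# Minimal sketch, assuming the checkpoint exposes a standard sequence-classification head.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="BAHIJA/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a binary acceptability task; the label names come from the repo's config
# and may be generic (LABEL_0 / LABEL_1) rather than "acceptable"/"unacceptable".
print(classifier("The book was written by the author."))
```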
BE/demo-sentiment2021
https://huggingface.co/BE/demo-sentiment2021
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BE/demo-sentiment2021 ### Model URL : https://huggingface.co/BE/demo-sentiment2021 ### Model Description : No model card
BJTK2/model_name
https://huggingface.co/BJTK2/model_name
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BJTK2/model_name ### Model URL : https://huggingface.co/BJTK2/model_name ### Model Description : No model card
BME-TMIT/foszt2oszt
https://huggingface.co/BME-TMIT/foszt2oszt
Paper We publish an abstractive summarizer for Hungarian, an encoder-decoder model initialized with huBERT and fine-tuned on the ELTE.DH corpus of former Hungarian news portals. The model produces fluent output on the correct topic, but it hallucinates frequently. Our quantitative evaluation on automatic and human transcripts of news (with automatic and human-made punctuation, Tündik et al. (2019), Tündik and Szaszák (2019)) shows that the model is robust to errors in either automatic speech recognition or automatic punctuation restoration. In fine-tuning and inference, we followed a Jupyter notebook by Patrick von Platen. Most hyper-parameters are the same as those used by von Platen, but we found it advantageous to change the minimum length of the summary to 8 word-pieces (instead of 56) and the number of beams in beam search to 5 (instead of 4). Our model was fine-tuned on a server of the SZTAKI-HLT group, which kindly provided access to it.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BME-TMIT/foszt2oszt ### Model URL : https://huggingface.co/BME-TMIT/foszt2oszt ### Model Description : Paper We publish an abstractive summarizer for Hungarian, an encoder-decoder model initialized with huBERT and fine-tuned on the ELTE.DH corpus of former Hungarian news portals. The model produces fluent output on the correct topic, but it hallucinates frequently. Our quantitative evaluation on automatic and human transcripts of news (with automatic and human-made punctuation, Tündik et al. (2019), Tündik and Szaszák (2019)) shows that the model is robust to errors in either automatic speech recognition or automatic punctuation restoration. In fine-tuning and inference, we followed a Jupyter notebook by Patrick von Platen. Most hyper-parameters are the same as those used by von Platen, but we found it advantageous to change the minimum length of the summary to 8 word-pieces (instead of 56) and the number of beams in beam search to 5 (instead of 4). Our model was fine-tuned on a server of the SZTAKI-HLT group, which kindly provided access to it.
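Since the card quotes the inference settings (minimum summary length of 8 word-pieces, 5 beams) without showing code, the sketch below illustrates how those values could be passed to the transformers summarization pipeline; the placeholder input and everything not stated in the card are assumptions.
```python
# Sketch of inference with the settings quoted in the card
# (min_length=8 word-pieces, num_beams=5); all other defaults are assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="BME-TMIT/foszt2oszt")

hungarian_news_text = "..."  # placeholder for a Hungarian news article
summary = summarizer(hungarian_news_text, min_length=8, num_beams=5)
print(summary[0]["summary_text"])
```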
BOON/electra-xlnet
https://huggingface.co/BOON/electra-xlnet
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BOON/electra-xlnet ### Model URL : https://huggingface.co/BOON/electra-xlnet ### Model Description : No model card
BOON/electra_qa
https://huggingface.co/BOON/electra_qa
No model card
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BOON/electra_qa ### Model URL : https://huggingface.co/BOON/electra_qa ### Model Description : No model card
BSC-LT/RoBERTalex
https://huggingface.co/BSC-LT/RoBERTalex
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/RoBERTalex There are few models trained for the Spanish language. Some of them have been trained on low-resource, unclean corpora. The ones derived from the Spanish National Plan for Language Technologies are proficient at solving several tasks and have been trained on large-scale clean corpora. However, Spanish legal-domain language can be thought of as a language in its own right. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora. For more information, visit our GitHub repository. This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/RoBERTalex ### Model URL : https://huggingface.co/BSC-LT/RoBERTalex ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/RoBERTalex There are few models trained for the Spanish language. Some of them have been trained on low-resource, unclean corpora. The ones derived from the Spanish National Plan for Language Technologies are proficient at solving several tasks and have been trained on large-scale clean corpora. However, Spanish legal-domain language can be thought of as a language in its own right. We therefore created a Spanish legal model from scratch, trained exclusively on legal corpora. For more information, visit our GitHub repository. This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
BSC-LT/gpt2-large-bne
https://huggingface.co/BSC-LT/gpt2-large-bne
GPT2-large-bne is a transformer-based model for the Spanish language. It is based on the GPT-2 model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication within the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original GPT-2 model with a vocabulary size of 50,262 tokens. The GPT2-large-bne pre-training consists of an autoregressive language model training that follows the approach of GPT-2. The training lasted a total of 10 days with 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM. For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/gpt2-large-bne ### Model URL : https://huggingface.co/BSC-LT/gpt2-large-bne ### Model Description : GPT2-large-bne is a transformer-based model for the Spanish language. It is based on the GPT-2 model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication within the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original GPT-2 model with a vocabulary size of 50,262 tokens. The GPT2-large-bne pre-training consists of an autoregressive language model training that follows the approach of GPT-2. The training lasted a total of 10 days with 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM. For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
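As the card describes an autoregressive Spanish language model, a natural smoke test is the text-generation pipeline; the prompt and sampling settings below are illustrative assumptions, not values from the card.
```python
# Sketch: open-ended Spanish text generation with the pre-trained checkpoint.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="BSC-LT/gpt2-large-bne")
set_seed(42)  # only to make this illustrative run repeatable

# Prompt and generation settings are illustrative, not taken from the card.
outputs = generator("El Museo del Prado es", max_length=50, num_return_sequences=2)
for out in outputs:
    print(out["generated_text"])
```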
BSC-LT/roberta-base-biomedical-clinical-es
https://huggingface.co/BSC-LT/roberta-base-biomedical-clinical-es
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official repository and read our preprint "Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario.". This model is a RoBERTa-based model trained on a biomedical-clinical corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are: Then, the biomedical corpora are concatenated and further global deduplication among the biomedical corpora has been applied. Finally, the clinical corpus is concatenated with the cleaned biomedical corpus, resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora: The model has been evaluated on Named Entity Recognition (NER) using the following datasets: PharmaCoNER: a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). CANTEMIST: a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the mBERT and BETO models: The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. If you use our models, please cite our latest preprint: If you use our Medical Crawler corpus, please cite the preprint:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-biomedical-clinical-es ### Model URL : https://huggingface.co/BSC-LT/roberta-base-biomedical-clinical-es ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official repository and read our preprint "Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario.". This model is a RoBERTa-based model trained on a biomedical-clinical corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are: Then, the biomedical corpora are concatenated and further global deduplication among the biomedical corpora has been applied. Finally, the clinical corpus is concatenated with the cleaned biomedical corpus, resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora: The model has been evaluated on Named Entity Recognition (NER) using the following datasets: PharmaCoNER: a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). CANTEMIST: a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the mBERT and BETO models: The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. If you use our models, please cite our latest preprint: If you use our Medical Crawler corpus, please cite the preprint:
BSC-LT/roberta-base-biomedical-es
https://huggingface.co/BSC-LT/roberta-base-biomedical-es
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official repository and read our preprint "Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario.". This model is a RoBERTa-based model trained on a biomedical corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied: Finally, the corpora are concatenated and further global deduplication among the corpora has been applied. The result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora: The model has been evaluated on Named Entity Recognition (NER) using the following datasets: PharmaCoNER: a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). CANTEMIST: a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the mBERT and BETO models: The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. If you use our models, please cite our latest preprint: If you use our Medical Crawler corpus, please cite the preprint:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-biomedical-es ### Model URL : https://huggingface.co/BSC-LT/roberta-base-biomedical-es ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official repository and read our preprint "Carrino, C. P., Armengol-Estapé, J., Gutiérrez-Fandiño, A., Llop-Palao, J., Pàmies, M., Gonzalez-Agirre, A., & Villegas, M. (2021). Biomedical and Clinical Language Models for Spanish: On the Benefits of Domain-Specific Pretraining in a Mid-Resource Scenario.". This model is a RoBERTa-based model trained on a biomedical corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers. To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied: Finally, the corpora are concatenated and further global deduplication among the corpora has been applied. The result is a medium-size biomedical corpus for Spanish composed of about 963M tokens. The table below shows some basic statistics of the individual cleaned corpora: The model has been evaluated on Named Entity Recognition (NER) using the following datasets: PharmaCoNER: a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). CANTEMIST: a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. The evaluation results are compared against the mBERT and BETO models: The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. If you use our models, please cite our latest preprint: If you use our Medical Crawler corpus, please cite the preprint:
BSC-LT/roberta-base-bne-capitel-ner-plus
https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner-plus
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). IMPORTANT ABOUT THIS MODEL: We modified the dataset to make this model more robust to general Spanish input. In Spanish, all named entities are capitalized, and since this dataset has been prepared by experts, it is entirely correct in that respect. We randomly selected some entities and lower-cased them so that the model learns not only that named entities are capitalized, but also the structure of a sentence that should contain a named entity. For instance, in "My name is [placeholder]", the [placeholder] should be a named entity even though it is not capitalized. The model trained on the original CAPITEL dataset can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne-capitel-ner Examples: This model: Model trained on original data: F1 Score: 0.8867 For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-bne-capitel-ner-plus ### Model URL : https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner-plus ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner-plus RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). IMPORTANT ABOUT THIS MODEL: We modified the dataset to make this model more robust to general Spanish input. In Spanish, all named entities are capitalized, and since this dataset has been prepared by experts, it is entirely correct in that respect. We randomly selected some entities and lower-cased them so that the model learns not only that named entities are capitalized, but also the structure of a sentence that should contain a named entity. For instance, in "My name is [placeholder]", the [placeholder] should be a named entity even though it is not capitalized. The model trained on the original CAPITEL dataset can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne-capitel-ner Examples: This model: Model trained on original data: F1 Score: 0.8867 For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
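The crawled card refers to examples ("This model:" vs. "Model trained on original data:") that did not survive extraction; the following hedged sketch shows what token-classification inference typically looks like for this checkpoint, using a lower-cased entity of the kind the modified dataset targets. The sentence is invented for illustration.
```python
# Sketch: NER inference with word pieces aggregated into entity spans.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="BSC-LT/roberta-base-bne-capitel-ner-plus",
    aggregation_strategy="simple",
)

# The card emphasises robustness to lower-cased entities such as "madrid".
print(ner("Me llamo francisco javier y vivo en madrid."))
```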
BSC-LT/roberta-base-bne-capitel-ner
https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). F1 Score: 0.8960 For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-bne-capitel-ner ### Model URL : https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-ner RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). F1 Score: 0.8960 For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
BSC-LT/roberta-base-bne-capitel-pos
https://huggingface.co/BSC-LT/roberta-base-bne-capitel-pos
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-pos RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 2). F1 Score: 0.9846 (average of 5 runs). For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-bne-capitel-pos ### Model URL : https://huggingface.co/BSC-LT/roberta-base-bne-capitel-pos ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-capitel-pos RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 2). F1 Score: 0.9846 (average of 5 runs). For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
BSC-LT/roberta-base-bne-sqac
https://huggingface.co/BSC-LT/roberta-base-bne-sqac
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the SQAC corpus. F1 Score: 0.7923 (average of 5 runs). For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-bne-sqac ### Model URL : https://huggingface.co/BSC-LT/roberta-base-bne-sqac ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne-sqac RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne The dataset used is the SQAC corpus. F1 Score: 0.7923 (average of 5 runs). For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
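For the SQAC-fine-tuned checkpoint, extractive question answering can be sketched with the question-answering pipeline; the question and context below are invented for illustration and are not from the card.
```python
# Sketch: extractive QA over a short Spanish context.
from transformers import pipeline

qa = pipeline("question-answering", model="BSC-LT/roberta-base-bne-sqac")

# Question and context are illustrative only.
result = qa(
    question="¿Dónde vivo?",
    context="Me llamo Sarah y vivo en Londres.",
)
print(result["answer"], result["score"])
```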
BSC-LT/roberta-base-bne
https://huggingface.co/BSC-LT/roberta-base-bne
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication within the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 50,262 tokens. The RoBERTa-base-bne pre-training consists of a masked language model training that follows the approach employed for the RoBERTa base model. The training lasted a total of 48 hours with 16 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM. For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-bne ### Model URL : https://huggingface.co/BSC-LT/roberta-base-bne ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The National Library of Spain (Biblioteca Nacional de España) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019. To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication within the corpus is applied, resulting in 570GB of text. Some of the statistics of the corpus: The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 50,262 tokens. The RoBERTa-base-bne pre-training consists of a masked language model training that follows the approach employed for the RoBERTa base model. The training lasted a total of 48 hours with 16 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM. For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
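Because the base checkpoint is a masked language model, a quick way to exercise it is to score a masked token directly; the sentence below is an assumption, and RoBERTa-style checkpoints are expected to use "<mask>" as the mask token.
```python
# Sketch: loading the checkpoint directly and reading the top predictions for a masked token.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("BSC-LT/roberta-base-bne")
model = AutoModelForMaskedLM.from_pretrained("BSC-LT/roberta-base-bne")

# The sentence is illustrative; "<mask>" is the RoBERTa-style mask token.
inputs = tokenizer("Madrid es la <mask> de España.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```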
BSC-LT/roberta-base-ca
https://huggingface.co/BSC-LT/roberta-base-ca
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca If you use any of these resources (datasets or models) in your work, please cite our latest paper: BERTa is a transformer-based masked language model for the Catalan language. It is based on the RoBERTa base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers. The training corpus consists of several corpora gathered from web crawling and public corpora. The publicly available corpora are: the Catalan part of the DOGC corpus, a set of documents from the Official Gazette of the Catalan Government; the Catalan Open Subtitles, a collection of translated movie subtitles; the non-shuffled version of the Catalan part of the OSCAR corpus (Ortiz Suárez et al., 2019), a collection of monolingual corpora filtered from Common Crawl; the CaWac corpus, a web corpus of Catalan built from the .cat top-level domain in late 2013; and the non-deduplicated version of the Catalan Wikipedia articles downloaded on 18-08-2020. The crawled corpora are: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency. To obtain a high-quality training corpus, each corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive contents. During the process, document boundaries are kept. Finally, the corpora are concatenated and further global deduplication among the corpora is applied. The final training corpus consists of about 1.8B tokens. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 52,000 tokens. The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM. The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), which has been created along with the model. It contains the following tasks and their related datasets: Part-of-Speech Tagging (POS): Catalan-Ancora, from the Universal Dependencies treebank of the well-known Ancora corpus; Named Entity Recognition (NER): AnCora Catalan 2.0.0, named entities extracted from the original Ancora version, filtering out some unconventional ones, like book titles, and transcribed into a standard CONLL-IOB format; Text Classification (TC): TeCla, consisting of 137k news pieces from the Catalan News Agency (ACN) corpus; Semantic Textual Similarity (STS): Catalan semantic textual similarity, consisting of more than 3000 sentence pairs annotated with the semantic similarity between them, scraped from the Catalan Textual Corpus; Question Answering (QA): ViquiQuAD, consisting of more than 15,000 questions outsourced from Catalan Wikipedia, randomly chosen from a set of 596 articles that were originally written in Catalan, and XQuAD, the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia, used only as a test set. Here are the train/dev/test splits of the datasets: The fine-tuning on downstream tasks has been performed with the HuggingFace Transformers library. Below are the evaluation results on the CLUB tasks, compared with the multilingual mBERT and XLM-RoBERTa models and the Catalan WikiBERT-ca model. The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition. Below is an example of how to use the masked language modelling task with a pipeline. This model was originally published as bsc/roberta-base-ca-cased.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-base-ca ### Model URL : https://huggingface.co/BSC-LT/roberta-base-ca ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca If you use any of these resources (datasets or models) in your work, please cite our latest paper: BERTa is a transformer-based masked language model for the Catalan language. It is based on the RoBERTa base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers. The training corpus consists of several corpora gathered from web crawling and public corpora. The publicly available corpora are: the Catalan part of the DOGC corpus, a set of documents from the Official Gazette of the Catalan Government; the Catalan Open Subtitles, a collection of translated movie subtitles; the non-shuffled version of the Catalan part of the OSCAR corpus (Ortiz Suárez et al., 2019), a collection of monolingual corpora filtered from Common Crawl; the CaWac corpus, a web corpus of Catalan built from the .cat top-level domain in late 2013; and the non-deduplicated version of the Catalan Wikipedia articles downloaded on 18-08-2020. The crawled corpora are: the Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains; the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government; and the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the Catalan News Agency. To obtain a high-quality training corpus, each corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of malformed sentences and deduplication of repetitive contents. During the process, document boundaries are kept. Finally, the corpora are concatenated and further global deduplication among the corpora is applied. The final training corpus consists of about 1.8B tokens. The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original RoBERTa model with a vocabulary size of 52,000 tokens. The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM. The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), which has been created along with the model. It contains the following tasks and their related datasets: Part-of-Speech Tagging (POS): Catalan-Ancora, from the Universal Dependencies treebank of the well-known Ancora corpus; Named Entity Recognition (NER): AnCora Catalan 2.0.0, named entities extracted from the original Ancora version, filtering out some unconventional ones, like book titles, and transcribed into a standard CONLL-IOB format; Text Classification (TC): TeCla, consisting of 137k news pieces from the Catalan News Agency (ACN) corpus; Semantic Textual Similarity (STS): Catalan semantic textual similarity, consisting of more than 3000 sentence pairs annotated with the semantic similarity between them, scraped from the Catalan Textual Corpus; Question Answering (QA): ViquiQuAD, consisting of more than 15,000 questions outsourced from Catalan Wikipedia, randomly chosen from a set of 596 articles that were originally written in Catalan, and XQuAD, the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia, used only as a test set. Here are the train/dev/test splits of the datasets: The fine-tuning on downstream tasks has been performed with the HuggingFace Transformers library. Below are the evaluation results on the CLUB tasks, compared with the multilingual mBERT and XLM-RoBERTa models and the Catalan WikiBERT-ca model. The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition. Below is an example of how to use the masked language modelling task with a pipeline. This model was originally published as bsc/roberta-base-ca-cased.
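The card states that an example of the masked-language-modelling pipeline follows, but the snippet was lost in the crawl; below is a hedged reconstruction using the fill-mask pipeline, with a Catalan sentence invented for illustration.
```python
# Sketch of the fill-mask usage the card refers to; the sentence is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="BSC-LT/roberta-base-ca")

for prediction in unmasker("Barcelona és la <mask> de Catalunya."):
    print(prediction["token_str"], round(prediction["score"], 3))
```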
BSC-LT/roberta-large-bne-capitel-ner
https://huggingface.co/BSC-LT/roberta-large-bne-capitel-ner
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-ner RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). F1 Score: 0.8998 For evaluation details visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-large-bne-capitel-ner ### Model URL : https://huggingface.co/BSC-LT/roberta-large-bne-capitel-ner ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-ner RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne. The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 1). F1 Score: 0.8998. For evaluation details, visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
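As a hedged usage sketch (not part of the original card), this NER fine-tune can be queried through the Transformers token-classification pipeline; the model id follows this entry's URL and the Spanish sentence is illustrative only. If the BSC-LT id has been removed, the PlanTL-GOB-ES id from the notice should work instead.

```python
from transformers import pipeline

# Token-classification (NER) pipeline; aggregation_strategy="simple" merges
# sub-word tokens into whole entity spans.
ner = pipeline(
    "ner",
    model="BSC-LT/roberta-large-bne-capitel-ner",
    aggregation_strategy="simple",
)

# Illustrative Spanish sentence (assumption, not from the card).
for entity in ner("La Biblioteca Nacional de España está en Madrid."):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```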
BSC-LT/roberta-large-bne-capitel-pos
https://huggingface.co/BSC-LT/roberta-large-bne-capitel-pos
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-pos RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne. The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 2). F1 Score: 0.9851 (average of 5 runs). For evaluation details, visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-large-bne-capitel-pos ### Model URL : https://huggingface.co/BSC-LT/roberta-large-bne-capitel-pos ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-pos RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne. The dataset used is the one from the CAPITEL competition at IberLEF 2020 (sub-task 2). F1 Score: 0.9851 (average of 5 runs). For evaluation details, visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
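Likewise, a hedged sketch for the POS fine-tune (not from the original card): part-of-speech tagging is served through the same token-classification pipeline, with the predicted labels being POS tags rather than entity types; the example sentence is illustrative and the model id follows this entry's URL.

```python
from transformers import pipeline

# POS tagging uses the token-classification pipeline; the predicted
# "entity_group" values are part-of-speech tags.
pos_tagger = pipeline(
    "token-classification",
    model="BSC-LT/roberta-large-bne-capitel-pos",
    aggregation_strategy="simple",
)

# Illustrative Spanish sentence (assumption, not from the card).
for token in pos_tagger("El corpus fue compilado entre 2009 y 2019."):
    print(token["word"], token["entity_group"])
```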
BSC-LT/roberta-large-bne-sqac
https://huggingface.co/BSC-LT/roberta-large-bne-sqac
⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-sqac RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne. The dataset used is the SQAC corpus. F1 Score: 0.7993 (average of 5 runs). For evaluation details, visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : BSC-LT/roberta-large-bne-sqac ### Model URL : https://huggingface.co/BSC-LT/roberta-large-bne-sqac ### Model Description : ⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED: https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-sqac RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the RoBERTa large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019. The original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne. The dataset used is the SQAC corpus. F1 Score: 0.7993 (average of 5 runs). For evaluation details, visit our GitHub repository. Check out our paper for all the details: https://arxiv.org/abs/2107.07253
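A hedged sketch for the QA fine-tune (not from the original card): extractive question answering through the question-answering pipeline. The question/context pair is illustrative only, and the model id follows this entry's URL; if the BSC-LT id has been removed, use the PlanTL-GOB-ES id from the notice.

```python
from transformers import pipeline

# Extractive question answering with the SQAC-finetuned model.
qa = pipeline("question-answering", model="BSC-LT/roberta-large-bne-sqac")

# Illustrative Spanish question/context pair (assumption, not from the card).
result = qa(
    question="¿Qué institución realizó los rastreos web?",
    context=(
        "El corpus fue compilado a partir de los rastreos web realizados por la "
        "Biblioteca Nacional de España entre 2009 y 2019."
    ),
)
print(result["answer"], round(result["score"], 3))
```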