Dataset columns: Model Name (string, 5–122 chars), URL (string, 28–145 chars), Crawled Text (string, 1–199k chars), text (string, 180–199k chars).
DaWang/demo
https://huggingface.co/DaWang/demo
No model card.
Dablio/Dablio
https://huggingface.co/Dablio/Dablio
No model card.
Daiki/scibert_scivocab_uncased-finetuned-cola
https://huggingface.co/Daiki/scibert_scivocab_uncased-finetuned-cola
No model card.
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen
https://huggingface.co/DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen
No model card.
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken
https://huggingface.co/DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken
No model card.
Daivakai/DialoGPT-small-saitama
https://huggingface.co/Daivakai/DialoGPT-small-saitama
Saitama DialoGPT model.
Daltcamalea01/Camaleaodalt
https://huggingface.co/Daltcamalea01/Camaleaodalt
No model card.
DamolaMack/Classyfied
https://huggingface.co/DamolaMack/Classyfied
No model card.
DanBot/TCRsynth
https://huggingface.co/DanBot/TCRsynth
No model card.
DanL/scientific-challenges-and-directions
https://huggingface.co/DanL/scientific-challenges-and-directions
We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the challenges and directions are defined as follows. Challenge: a sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap. Research direction: a sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications, or hints that an issue is worthy of exploration. This model is described in our paper, A Search Engine for Discovery of Scientific Challenges and Directions (we have upgraded the infrastructure since the paper was released, so there are slight differences in the results). Our dataset can be found here. Please cite our paper if you use our datasets or models in your project; see the BibTeX. Feel free to email us, and check out our search engine as an example application. This model is a fine-tuned version of PubMedBERT on the scientific-challenges-and-directions dataset, designed for multi-label text classification. It is trained on a collection of 2894 sentences and their surrounding contexts, drawn from 1786 full-text papers in the CORD-19 corpus and labeled for challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test split of the data, see section 3.1 in our paper. We include an example notebook that uses the model for inference in our repo (Inference_Notebook.ipynb); a training notebook is also included. The following hyperparameters were used during training: The model achieves the following results on the test set: If using our dataset and models, please cite: Please don't hesitate to reach out. Email: lahav@mail.tau.ac.il, tomh@allenai.org.
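A minimal inference sketch for this model, assuming the checkpoint loads as a standard Hugging Face sequence-classification model with sigmoid-activated multi-label outputs; the example sentence is illustrative and the label names are read from the model's config rather than listed here:

```python
# Minimal sketch: multi-label classification with transformers
# (assumes the checkpoint exposes AutoModelForSequenceClassification weights).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "DanL/scientific-challenges-and-directions"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "Further work is needed to understand long-term immune responses."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: apply a sigmoid per label instead of a softmax over labels.
probs = torch.sigmoid(logits)[0]
for idx, p in enumerate(probs):
    label = model.config.id2label.get(idx, f"label_{idx}")
    print(f"{label}: {float(p):.3f}")
```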
Danbi/distilgpt2-finetuned-wikitext2
https://huggingface.co/Danbi/distilgpt2-finetuned-wikitext2
No model card.
Danbi/distilroberta-base-finetuned-wikitext2
https://huggingface.co/Danbi/distilroberta-base-finetuned-wikitext2
No model card.
Dandara/bertimbau-socioambiental
https://huggingface.co/Dandara/bertimbau-socioambiental
No model card.
Danih1502/t5-base-finetuned-en-to-de
https://huggingface.co/Danih1502/t5-base-finetuned-en-to-de
No model card.
Danih1502/t5-small-finetuned-en-to-de
https://huggingface.co/Danih1502/t5-small-finetuned-en-to-de
No model card.
DannyMichael/ECU911
https://huggingface.co/DannyMichael/ECU911
No model card.
Darein/Def
https://huggingface.co/Darein/Def
No model card.
DarkKibble/DialoGPT-medium-Tankman
https://huggingface.co/DarkKibble/DialoGPT-medium-Tankman
No model card.
DarkWolf/kn-electra-small
https://huggingface.co/DarkWolf/kn-electra-small
No model card.
Darkecho789/email-gen
https://huggingface.co/Darkecho789/email-gen
No model card.
DarkestSky/distilbert-base-uncased-finetuned-ner
https://huggingface.co/DarkestSky/distilbert-base-uncased-finetuned-ner
No model card.
Darkrider/covidbert_medmarco
https://huggingface.co/Darkrider/covidbert_medmarco
Fine-tuned CovidBERT on the Med-Marco dataset for passage ranking. This is the CovidBERT model trained by DeepSet on AllenAI's CORD-19 dataset of scientific articles about coronaviruses. The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the SNLI and MultiNLI datasets using the sentence-transformers library to produce universal sentence embeddings [1], using the average pooling strategy and a softmax loss. It is further fine-tuned on the Med-Marco dataset. MacAvaney et al., in their paper "SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search", used MedSyn, a lexicon of layperson and expert terminology for various medical conditions, to filter for medical questions. One could also use UMLS ontologies instead, but the appeal of MedSyn is that its terms reflect general conversational language rather than terms drawn from scientific literature. Parameter details for the original training on CORD-19 are available on DeepSet's MLFlow. Base model: deepset/covid_bert_base from HuggingFace's AutoModel.
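A hedged sketch of how a sentence-embedding model like this could be used for passage ranking, assuming the repository loads directly with the sentence-transformers library (matching the average-pooling setup described above); the query and passages are purely illustrative:

```python
# Sketch: rank candidate passages against a query by cosine similarity of
# sentence embeddings (assumes the repo is loadable via sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Darkrider/covidbert_medmarco")

query = "What are the common symptoms of COVID-19?"
passages = [
    "Fever, cough, and fatigue are frequently reported symptoms.",
    "The vaccine supply chain requires cold storage logistics.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Higher cosine similarity means the passage is a better match for the query.
scores = util.cos_sim(query_emb, passage_embs)[0]
for passage, score in sorted(zip(passages, scores), key=lambda x: -float(x[1])):
    print(f"{float(score):.3f}  {passage}")
```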
Darkrider/covidbert_mednli
https://huggingface.co/Darkrider/covidbert_mednli
This is the CovidBERT model trained by DeepSet on AllenAI's CORD-19 dataset of scientific articles about coronaviruses. The model uses the original BERT wordpiece vocabulary and was subsequently fine-tuned on the SNLI and MultiNLI datasets using the sentence-transformers library to produce universal sentence embeddings [1], using the average pooling strategy and a softmax loss. It is further fine-tuned on both MedNLI datasets available at PhysioNet: ACL-BIONLP 2019 and MedNLI from MIMIC. Parameter details for the original training on CORD-19 are available on DeepSet's MLFlow. Base model: deepset/covid_bert_base from HuggingFace's AutoModel.
Darren/darren
https://huggingface.co/Darren/darren
No model card.
DarshanDeshpande/marathi-distilbert
https://huggingface.co/DarshanDeshpande/marathi-distilbert
This model is an adaptation of DistilBERT (Victor Sanh et al., 2019) for the Marathi language. This version of Marathi-DistilBERT is trained from scratch on approximately 11.2 million sentences. The training data has been extracted from a variety of sources. The data is cleaned by removing all languages other than Marathi while preserving common punctuation. The model is trained from scratch using an Adam optimizer with a learning rate of 1e-4, default β1 and β2 values of 0.9 and 0.999 respectively, a total batch size of 256 on a v3-8 TPU, and a mask probability of 15%.
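A minimal usage sketch with the transformers fill-mask pipeline, assuming the checkpoint ships a masked-language-modeling head; the Marathi example sentence is illustrative only:

```python
# Sketch: masked-token prediction with the transformers fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DarshanDeshpande/marathi-distilbert")

# Read the mask token from the tokenizer instead of hard-coding it.
mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"मी आज {mask} खाल्ले."):
    print(prediction["token_str"], round(prediction["score"], 3))
```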
Darya/layoutlmv2-finetuned-funsd-test
https://huggingface.co/Darya/layoutlmv2-finetuned-funsd-test
No model card.
Daryaflp/roberta-retrained_ru_covid
https://huggingface.co/Daryaflp/roberta-retrained_ru_covid
This model is a fine-tuned version of blinoff/roberta-base-russian-v0 on an unknown dataset. It achieves the following results on the evaluation set: Three further sections of the card are marked "More information needed". The following hyperparameters were used during training:
DataikuNLP/TinyBERT_General_4L_312D
https://huggingface.co/DataikuNLP/TinyBERT_General_4L_312D
This model is a copy of this model repository from Huawei Noah at the specific commit 34707a33cd59a94ecde241ac209bf35103691b43. TinyBERT is 7.5x smaller and 9.4x faster at inference than BERT-base and achieves competitive performance on natural language understanding tasks. It performs a novel Transformer distillation at both the pre-training and the task-specific learning stages. In general distillation, the original BERT-base without fine-tuning is used as the teacher and a large-scale text corpus as the learning data. By performing the Transformer distillation on general-domain text, we obtain a general TinyBERT that provides a good initialization for task-specific distillation. The general TinyBERT is provided here for your tasks at hand. For more details about the techniques of TinyBERT, refer to the paper TinyBERT: Distilling BERT for Natural Language Understanding. If you find TinyBERT useful in your research, please cite that paper.
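A short sketch of using the general TinyBERT checkpoint as a lightweight encoder initialization for a downstream classifier; plain fine-tuning is shown here as a simpler stand-in for the task-specific distillation the card describes, and the two-label setup is an assumption for illustration:

```python
# Sketch: load general TinyBERT and attach a fresh classification head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "DataikuNLP/TinyBERT_General_4L_312D"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# The head is randomly initialized; fine-tune on your task before relying on outputs.
batch = tokenizer(["a small but capable encoder"], return_tensors="pt")
print(model(**batch).logits.shape)  # expected: (1, 2)
```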
DataikuNLP/average_word_embeddings_glove.6B.300d
https://huggingface.co/DataikuNLP/average_word_embeddings_glove.6B.300d
This model is a copy of this model repository from sentence-transformers at the specific commit 5d2b7d1c127036ae98b9d487eca4d48744edc709. This is a sentence-transformers model: it maps sentences and paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once sentence-transformers is installed; a minimal sketch follows below. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. This model was trained by sentence-transformers. If you find this model helpful, feel free to cite the publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
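The card's usage snippet is referenced above but not reproduced; a minimal sketch of the standard sentence-transformers call, assuming the repository loads directly by its model ID:

```python
# Sketch: encode sentences into 300-dimensional average-GloVe vectors.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DataikuNLP/average_word_embeddings_glove.6B.300d")

sentences = ["This is an example sentence.", "Each sentence is converted."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # expected: (2, 300)
```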
DataikuNLP/camembert-base
https://huggingface.co/DataikuNLP/camembert-base
This model is a copy of this model repository at the specific commit 482393b6198924f9da270b1aaf37d238aafca99b. CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains. For further information or requests, please go to the CamemBERT website. CamemBERT was trained and evaluated by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. If you use our work, please cite:
DataikuNLP/distiluse-base-multilingual-cased-v1
https://huggingface.co/DataikuNLP/distiluse-base-multilingual-cased-v1
This model is a copy of this model repository from sentence-transformers at the specific commit 3a706e4d65c04f868c4684adfd4da74141be8732. This is a sentence-transformers model: it maps sentences and paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once sentence-transformers is installed. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. This model was trained by sentence-transformers. If you find this model helpful, feel free to cite the publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
DataikuNLP/paraphrase-MiniLM-L6-v2
https://huggingface.co/DataikuNLP/paraphrase-MiniLM-L6-v2
This model is a copy of this model repository from sentence-transformers at the specific commit c4dfcde8a3e3e17e85cd4f0ec1925a266187f48e. This is a sentence-transformers model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once sentence-transformers is installed. Without sentence-transformers, you can still use the model: pass your input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings, as in the sketch below. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. This model was trained by sentence-transformers. If you find this model helpful, feel free to cite the publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
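A sketch of the "without sentence-transformers" path described above: run the transformer and apply masked mean pooling over the token embeddings. This mirrors the standard sentence-transformers recipe and assumes the checkpoint loads with AutoModel:

```python
# Sketch: sentence embeddings via plain transformers + masked mean pooling.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "DataikuNLP/paraphrase-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["This is an example sentence.", "Each sentence is converted."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq, 384)

# Mean pooling: average token embeddings while ignoring padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # expected: (2, 384)
```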
DataikuNLP/paraphrase-albert-small-v2
https://huggingface.co/DataikuNLP/paraphrase-albert-small-v2
This model is a copy of this model repository from sentence-transformers at the specific commit 1eb1996223dd90a4c25be2fc52f6f336419a0d52. This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once sentence-transformers is installed. Without sentence-transformers, you can still use the model: pass your input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. This model was trained by sentence-transformers. If you find this model helpful, feel free to cite the publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
https://huggingface.co/DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
This model is a copy of this model repository from sentence-transformers at the specific commit d66eff4d8a8598f264f166af8db67f7797164651. This is a sentence-transformers model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. Using the model is straightforward once sentence-transformers is installed. Without sentence-transformers, you can still use the model: pass your input through the transformer model and then apply the right pooling operation on top of the contextualized word embeddings. For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net. This model was trained by sentence-transformers. If you find this model helpful, feel free to cite the publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.
Dave/twomad-model
https://huggingface.co/Dave/twomad-model
No model card.
DavidAMcIntosh/DialoGPT-small-rick
https://huggingface.co/DavidAMcIntosh/DialoGPT-small-rick
No model card.
DavidAMcIntosh/small-rick
https://huggingface.co/DavidAMcIntosh/small-rick
No model card.
DavidSpaceG/MSGIFSR
https://huggingface.co/DavidSpaceG/MSGIFSR
No model card.
Davlan/bert-base-multilingual-cased-finetuned-amharic
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-amharic
bert-base-multilingual-cased-finetuned-amharic is an Amharic BERT model obtained by replacing the mBERT vocabulary with an Amharic vocabulary (the language was not supported by mBERT) and fine-tuning the bert-base-multilingual-cased model on Amharic-language texts. It provides better performance than multilingual BERT on named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an Amharic corpus using an Amharic vocabulary. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on Amharic CC-100 and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-hausa
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-hausa
bert-base-multilingual-cased-finetuned-hausa is a Hausa BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Hausa-language texts. It provides better performance than multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Hausa corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on Hausa CC-100 and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-igbo
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-igbo
bert-base-multilingual-cased-finetuned-igbo is an Igbo BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Igbo-language texts. It provides better performance than multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an Igbo corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on JW300 + OPUS CC-Align + the IGBO NLP Corpus + Igbo CC-100 and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-kinyarwanda
bert-base-multilingual-cased-finetuned-kinyarwanda is a Kinyarwanda BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Kinyarwanda-language texts. It provides better performance than multilingual BERT on named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Kinyarwanda corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on JW300 + KIRNEWS + BBC Gahuza and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-luganda
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-luganda
bert-base-multilingual-cased-finetuned-luganda is a Luganda BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Luganda-language texts. It provides better performance than multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Luganda corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on JW300 + BUKKEDDE + Luganda CC-100 and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-luo
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-luo
bert-base-multilingual-cased-finetuned-luo is a Luo BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Luo-language texts. It provides better performance than multilingual BERT on named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Luo corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on JW300 and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-naija
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-naija
bert-base-multilingual-cased-finetuned-naija is a Nigerian-Pidgin BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Nigerian-Pidgin texts. It provides better performance than multilingual BERT on named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Nigerian-Pidgin corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on JW300 + BBC Pidgin and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-swahili
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-swahili
bert-base-multilingual-cased-finetuned-swahili is a Swahili BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Swahili-language texts. It provides better performance than multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Swahili corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on Swahili CC-100 and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-wolof
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-wolof
bert-base-multilingual-cased-finetuned-wolof is a Wolof BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Wolof-language texts. It provides better performance than multilingual BERT on named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Wolof corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on the Bible OT + OPUS + news corpora (Lu Defu Waxu, Saabal, and Wolof Online) and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-finetuned-yoruba
https://huggingface.co/Davlan/bert-base-multilingual-cased-finetuned-yoruba
bert-base-multilingual-cased-finetuned-yoruba is a Yoruba BERT model obtained by fine-tuning the bert-base-multilingual-cased model on Yorùbá-language texts. It provides better performance than multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on a Yorùbá corpus. You can use this model with the Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time; it may not generalize well to all use cases in different domains. This model was fine-tuned on the Bible, JW300, Menyo-20k, the Yoruba Embedding corpus, CC-Aligned, Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends, and trained on a single NVIDIA V100 GPU, by David Adelani.
Davlan/bert-base-multilingual-cased-masakhaner
https://huggingface.co/Davlan/bert-base-multilingual-cased-masakhaner
language: datasets: bert-base-multilingual-cased-masakhaner is the first Named Entity Recognition model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned mBERT base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane MasakhaNER dataset. You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane MasakhaNER dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original MasakhaNER paper which trained & evaluated the model on MasakhaNER corpus.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/bert-base-multilingual-cased-masakhaner ### Model URL : https://huggingface.co/Davlan/bert-base-multilingual-cased-masakhaner ### Model Description : language: datasets: bert-base-multilingual-cased-masakhaner is the first Named Entity Recognition model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned mBERT base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane MasakhaNER dataset. You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane MasakhaNER dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original MasakhaNER paper which trained & evaluated the model on MasakhaNER corpus.
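A minimal NER sketch assuming the standard Transformers token-classification pipeline; aggregation_strategy="simple" (available in recent Transformers releases) merges B-/I- word pieces into whole entity spans, and the Nigerian Pidgin sentence is illustrative.

```python
from transformers import pipeline

# NER pipeline over the MasakhaNER-fine-tuned mBERT model.
ner = pipeline(
    "ner",
    model="Davlan/bert-base-multilingual-cased-masakhaner",
    aggregation_strategy="simple",  # group B-/I- tokens into full entity spans
)

example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"  # illustrative sentence
for entity in ner(example):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```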
Davlan/bert-base-multilingual-cased-ner-hrl
https://huggingface.co/Davlan/bert-base-multilingual-cased-ner-hrl
language: bert-base-multilingual-cased-ner-hrl is a Named Entity Recognition model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned mBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an aggregation of 10 high-resourced languages You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. The training data for the 10 languages are from: The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/bert-base-multilingual-cased-ner-hrl ### Model URL : https://huggingface.co/Davlan/bert-base-multilingual-cased-ner-hrl ### Model Description : language: bert-base-multilingual-cased-ner-hrl is a Named Entity Recognition model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned mBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a bert-base-multilingual-cased model that was fine-tuned on an aggregation of 10 high-resourced languages You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. The training data for the 10 languages are from: The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
Davlan/byt5-base-eng-yor-mt
https://huggingface.co/Davlan/byt5-base-eng-yor-mt
language: byt5-base-eng-yor-mt is a machine translation model from English language to Yorùbá language based on a fine-tuned byt5-base model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a byt5-base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning byt5-base achieves 12.23 BLEU on Menyo-20k test set while mt5-base achieves 9.82 By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/byt5-base-eng-yor-mt ### Model URL : https://huggingface.co/Davlan/byt5-base-eng-yor-mt ### Model Description : language: byt5-base-eng-yor-mt is a machine translation model from English language to Yorùbá language based on a fine-tuned byt5-base model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a byt5-base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning byt5-base achieves 12.23 BLEU on Menyo-20k test set while mt5-base achieves 9.82 By David Adelani
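A sketch of English-to-Yorùbá generation with this checkpoint, assuming the generic AutoModelForSeq2SeqLM interface and no task prefix; the input sentence and generation settings are illustrative assumptions, not values from the model card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Davlan/byt5-base-eng-yor-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# ByT5 operates on raw bytes, so no language-specific preprocessing is needed.
inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```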
Davlan/byt5-base-yor-eng-mt
https://huggingface.co/Davlan/byt5-base-yor-eng-mt
language: byt5-base-yor-eng-mt is a machine translation model from Yorùbá language to English language based on a fine-tuned byt5-base model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a byt5-base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning byt5-base achieves 14.05 BLEU on Menyo-20k test set while mt5-base achieves 15.57 By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/byt5-base-yor-eng-mt ### Model URL : https://huggingface.co/Davlan/byt5-base-yor-eng-mt ### Model Description : language: byt5-base-yor-eng-mt is a machine translation model from Yorùbá language to English language based on a fine-tuned byt5-base model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a byt5-base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning byt5-base achieves 14.05 BLEU on Menyo-20k test set while mt5-base achieves 15.57 By David Adelani
Davlan/distilbert-base-multilingual-cased-masakhaner
https://huggingface.co/Davlan/distilbert-base-multilingual-cased-masakhaner
language: datasets: distilbert-base-multilingual-cased-masakhaner is the first Named Entity Recognition model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned DistilBERT base model. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a distilbert-base-multilingual-cased model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane MasakhaNER dataset. You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane MasakhaNER dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original MasakhaNER paper which trained & evaluated the model on MasakhaNER corpus.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/distilbert-base-multilingual-cased-masakhaner ### Model URL : https://huggingface.co/Davlan/distilbert-base-multilingual-cased-masakhaner ### Model Description : language: datasets: distilbert-base-multilingual-cased-masakhaner is the first Named Entity Recognition model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned DistilBERT base model. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a distilbert-base-multilingual-cased model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane MasakhaNER dataset. You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane MasakhaNER dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original MasakhaNER paper which trained & evaluated the model on MasakhaNER corpus.
Davlan/distilbert-base-multilingual-cased-ner-hrl
https://huggingface.co/Davlan/distilbert-base-multilingual-cased-ner-hrl
language: distilbert-base-multilingual-cased-ner-hrl is a Named Entity Recognition model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned DistilBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a distilbert-base-multilingual-cased model that was fine-tuned on an aggregation of 10 high-resourced languages You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. The training data for the 10 languages are from: The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/distilbert-base-multilingual-cased-ner-hrl ### Model URL : https://huggingface.co/Davlan/distilbert-base-multilingual-cased-ner-hrl ### Model Description : language: distilbert-base-multilingual-cased-ner-hrl is a Named Entity Recognition model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned DistilBERT base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER). Specifically, this model is a distilbert-base-multilingual-cased model that was fine-tuned on an aggregation of 10 high-resourced languages You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. The training data for the 10 languages are from: The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code.
Davlan/m2m100_418M-eng-yor-mt
https://huggingface.co/Davlan/m2m100_418M-eng-yor-mt
language: m2m100_418M-eng-yor-mt is a machine translation model from English language to Yorùbá language based on a fine-tuned facebook/m2m100_418M model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a facebook/m2m100_418M model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning m2m100_418M achieves 13.39 BLEU on Menyo-20k test set while mt5-base achieves 9.82 By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/m2m100_418M-eng-yor-mt ### Model URL : https://huggingface.co/Davlan/m2m100_418M-eng-yor-mt ### Model Description : language: m2m100_418M-eng-yor-mt is a machine translation model from English language to Yorùbá language based on a fine-tuned facebook/m2m100_418M model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a facebook/m2m100_418M model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning m2m100_418M achieves 13.39 BLEU on Menyo-20k test set while mt5-base achieves 9.82 By David Adelani
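A sketch using the M2M100-specific tokenizer interface: the source language is set on the tokenizer and the target language is forced at generation time. "en" and "yo" are the standard M2M100 language codes; the example sentence is illustrative.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "Davlan/m2m100_418M-eng-yor-mt"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"  # source: English
inputs = tokenizer("Good morning.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("yo"),  # target: Yorùbá
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```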
Davlan/m2m100_418M-yor-eng-mt
https://huggingface.co/Davlan/m2m100_418M-yor-eng-mt
language: m2m100_418M-yor-eng-mt is a machine translation model from Yorùbá language to English language based on a fine-tuned facebook/m2m100_418M model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a facebook/m2m100_418M model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning m2m100_418M achieves 16.76 BLEU on Menyo-20k test set while mt5-base achieves 15.57 By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/m2m100_418M-yor-eng-mt ### Model URL : https://huggingface.co/Davlan/m2m100_418M-yor-eng-mt ### Model Description : language: m2m100_418M-yor-eng-mt is a machine translation model from Yorùbá language to English language based on a fine-tuned facebook/m2m100_418M model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a facebook/m2m100_418M model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning m2m100_418M achieves 16.76 BLEU on Menyo-20k test set while mt5-base achieves 15.57 By David Adelani
Davlan/mT5_base_yoruba_adr
https://huggingface.co/Davlan/mT5_base_yoruba_adr
language: yo datasets: mT5_base_yoruba_adr is an automatic diacritics restoration model for Yorùbá language based on a fine-tuned mT5-base model. It achieves state-of-the-art performance for adding the correct diacritics or tonal marks to Yorùbá texts. Specifically, this model is a mT5_base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k You can use this model with Transformers pipeline for ADR. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 Yorùbá corpus and Menyo-20k dataset This model was trained on a single NVIDIA V100 GPU 64.63 BLEU on Global Voices test set 70.27 BLEU on Menyo-20k test set By Jesujoba Alabi and David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mT5_base_yoruba_adr ### Model URL : https://huggingface.co/Davlan/mT5_base_yoruba_adr ### Model Description : language: yo datasets: mT5_base_yoruba_adr is an automatic diacritics restoration model for Yorùbá language based on a fine-tuned mT5-base model. It achieves state-of-the-art performance for adding the correct diacritics or tonal marks to Yorùbá texts. Specifically, this model is a mT5_base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k You can use this model with Transformers pipeline for ADR. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 Yorùbá corpus and Menyo-20k dataset This model was trained on a single NVIDIA V100 GPU 64.63 BLEU on Global Voices test set 70.27 BLEU on Menyo-20k test set By Jesujoba Alabi and David Adelani
Davlan/mbart50-large-eng-yor-mt
https://huggingface.co/Davlan/mbart50-large-eng-yor-mt
language: mbart50-large-eng-yor-mt is a machine translation model from English language to Yorùbá language based on a fine-tuned facebook/mbart-large-50 model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a mbart-large-50 model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. The model was trained using Swahili (sw_KE) as the language since the pre-trained model does not initially support Yorùbá. Thus, you need to use sw_KE as the language code when evaluating the model. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning mbart50-large achieves 13.39 BLEU on Menyo-20k test set while mt5-base achieves 9.82 By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mbart50-large-eng-yor-mt ### Model URL : https://huggingface.co/Davlan/mbart50-large-eng-yor-mt ### Model Description : language: mbart50-large-eng-yor-mt is a machine translation model from English language to Yorùbá language based on a fine-tuned facebook/mbart-large-50 model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a mbart-large-50 model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. The model was trained using Swahili (sw_KE) as the language since the pre-trained model does not initially support Yorùbá. Thus, you need to use sw_KE as the language code when evaluating the model. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning mbart50-large achieves 13.39 BLEU on Menyo-20k test set while mt5-base achieves 9.82 By David Adelani
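A sketch reflecting the sw_KE workaround described above: because the fine-tuned checkpoint reuses Swahili's code for Yorùbá, sw_KE is forced as the target language at generation time. The example sentence and generation settings are illustrative assumptions.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "Davlan/mbart50-large-eng-yor-mt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source: English
inputs = tokenizer("Good morning.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Yorùbá is not a native mBART-50 code, so the card's sw_KE stand-in is used as target.
    forced_bos_token_id=tokenizer.lang_code_to_id["sw_KE"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```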
Davlan/mbart50-large-yor-eng-mt
https://huggingface.co/Davlan/mbart50-large-yor-eng-mt
language: mbart50-large-yor-eng-mt is a machine translation model from Yorùbá language to English language based on a fine-tuned facebook/mbart-large-50 model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a mbart-large-50 model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. The model was trained using Swahili (sw_KE) as the language since the pre-trained model does not initially support Yorùbá. Thus, you need to use sw_KE as the language code when evaluating the model. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning mbart50-large achieves 15.88 BLEU on Menyo-20k test set while mt5-base achieves 15.57 By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mbart50-large-yor-eng-mt ### Model URL : https://huggingface.co/Davlan/mbart50-large-yor-eng-mt ### Model Description : language: mbart50-large-yor-eng-mt is a machine translation model from Yorùbá language to English language based on a fine-tuned facebook/mbart-large-50 model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a mbart-large-50 model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k. The model was trained using Swahili (sw_KE) as the language since the pre-trained model does not initially support Yorùbá. Thus, you need to use sw_KE as the language code when evaluating the model. This model is limited by its training dataset. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on NVIDIA V100 GPU Fine-tuning mbart50-large achieves 15.88 BLEU on Menyo-20k test set while mt5-base achieves 15.57 By David Adelani
Davlan/mt5-small-en-pcm
https://huggingface.co/Davlan/mt5-small-en-pcm
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mt5-small-en-pcm ### Model URL : https://huggingface.co/Davlan/mt5-small-en-pcm ### Model Description : No model card New: Create and edit this model card directly on the website!
Davlan/mt5-small-pcm-en
https://huggingface.co/Davlan/mt5-small-pcm-en
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mt5-small-pcm-en ### Model URL : https://huggingface.co/Davlan/mt5-small-pcm-en ### Model Description : No model card New: Create and edit this model card directly on the website!
Davlan/mt5_base_eng_yor_mt
https://huggingface.co/Davlan/mt5_base_eng_yor_mt
language: mT5_base_eng_yor_mt is a machine translation model from English language to Yorùbá language based on a fine-tuned mT5-base model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a mT5_base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k You can use this model with Transformers pipeline for MT. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on a single NVIDIA V100 GPU 9.82 BLEU on Menyo-20k test set By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mt5_base_eng_yor_mt ### Model URL : https://huggingface.co/Davlan/mt5_base_eng_yor_mt ### Model Description : language: mT5_base_eng_yor_mt is a machine translation model from English language to Yorùbá language based on a fine-tuned mT5-base model. It establishes a strong baseline for automatically translating texts from English to Yorùbá. Specifically, this model is a mT5_base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k You can use this model with Transformers pipeline for MT. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 corpus and Menyo-20k dataset This model was trained on a single NVIDIA V100 GPU 9.82 BLEU on Menyo-20k test set By David Adelani
Davlan/mt5_base_yor_eng_mt
https://huggingface.co/Davlan/mt5_base_yor_eng_mt
language: mT5_base_yor_eng_mt is a machine translation model from Yorùbá language to English language based on a fine-tuned mT5-base model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a mT5_base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k You can use this model with Transformers pipeline for MT. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 Yorùbá corpus and Menyo-20k dataset This model was trained on a single NVIDIA V100 GPU 15.57 BLEU on Menyo-20k test set By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/mt5_base_yor_eng_mt ### Model URL : https://huggingface.co/Davlan/mt5_base_yor_eng_mt ### Model Description : language: mT5_base_yor_eng_mt is a machine translation model from Yorùbá language to English language based on a fine-tuned mT5-base model. It establishes a strong baseline for automatically translating texts from Yorùbá to English. Specifically, this model is a mT5_base model that was fine-tuned on JW300 Yorùbá corpus and Menyo-20k You can use this model with Transformers pipeline for MT. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 Yorùbá corpus and Menyo-20k dataset This model was trained on a single NVIDIA V100 GPU 15.57 BLEU on Menyo-20k test set By David Adelani
Davlan/naija-twitter-sentiment-afriberta-large
https://huggingface.co/Davlan/naija-twitter-sentiment-afriberta-large
language: naija-twitter-sentiment-afriberta-large is the first multilingual Twitter sentiment classification model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá) based on a fine-tuned castorini/afriberta_large model. It achieves state-of-the-art performance for the Twitter sentiment classification task trained on the NaijaSenti corpus. The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive. Specifically, this model is a castorini/afriberta_large model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from NaijaSenti dataset. You can use this model with Transformers for Sentiment Classification. This model is limited by its training dataset and domain, i.e. Twitter. This may not generalize well for all use cases in different domains. This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the original NaijaSenti paper.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/naija-twitter-sentiment-afriberta-large ### Model URL : https://huggingface.co/Davlan/naija-twitter-sentiment-afriberta-large ### Model Description : language: naija-twitter-sentiment-afriberta-large is the first multilingual Twitter sentiment classification model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá) based on a fine-tuned castorini/afriberta_large model. It achieves state-of-the-art performance for the Twitter sentiment classification task trained on the NaijaSenti corpus. The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive. Specifically, this model is a castorini/afriberta_large model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from NaijaSenti dataset. You can use this model with Transformers for Sentiment Classification. This model is limited by its training dataset and domain, i.e. Twitter. This may not generalize well for all use cases in different domains. This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the original NaijaSenti paper.
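A sketch of tweet classification with the standard text-classification pipeline; the exact label names returned depend on the checkpoint's config, and the Nigerian Pidgin tweet is illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Davlan/naija-twitter-sentiment-afriberta-large",
)

# Illustrative Nigerian Pidgin tweet; the model returns one of the three sentiment labels.
print(classifier("I like this phone well well, e dey work sharp sharp"))
```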
Davlan/xlm-roberta-base-finetuned-amharic
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic
language: am datasets: xlm-roberta-base-finetuned-amharic is an Amharic RoBERTa model obtained by fine-tuning xlm-roberta-base model on Amharic language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Amharic corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Amharic CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-amharic ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-amharic ### Model Description : language: am datasets: xlm-roberta-base-finetuned-amharic is an Amharic RoBERTa model obtained by fine-tuning xlm-roberta-base model on Amharic language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Amharic corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Amharic CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-chichewa
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-chichewa
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-chichewa ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-chichewa ### Model Description :
Davlan/xlm-roberta-base-finetuned-english
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-english
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-english ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-english ### Model Description :
Davlan/xlm-roberta-base-finetuned-hausa
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa
language: ha datasets: xlm-roberta-base-finetuned-hausa is a Hausa RoBERTa model obtained by fine-tuning xlm-roberta-base model on Hausa language texts. It provides better performance than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Hausa corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Hausa CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-hausa ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa ### Model Description : language: ha datasets: xlm-roberta-base-finetuned-hausa is a Hausa RoBERTa model obtained by fine-tuning xlm-roberta-base model on Hausa language texts. It provides better performance than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Hausa corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Hausa CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-igbo
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo
language: ig datasets: xlm-roberta-base-finetuned-igbo is an Igbo RoBERTa model obtained by fine-tuning xlm-roberta-base model on Igbo language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Igbo corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + OPUS CC-Align + IGBO NLP Corpus + Igbo CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-igbo ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo ### Model Description : language: ig datasets: xlm-roberta-base-finetuned-igbo is an Igbo RoBERTa model obtained by fine-tuning xlm-roberta-base model on Igbo language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Igbo corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + OPUS CC-Align + IGBO NLP Corpus + Igbo CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-kinyarwanda
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda
language: rw datasets: xlm-roberta-base-finetuned-kinyarwanda is a Kinyarwanda RoBERTa model obtained by fine-tuning xlm-roberta-base model on Kinyarwanda language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Kinyarwanda corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + KIRNEWS + BBC Gahuza This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-kinyarwanda ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda ### Model Description : language: rw datasets: xlm-roberta-base-finetuned-kinyarwanda is a Kinyarwanda RoBERTa model obtained by fine-tuning xlm-roberta-base model on Kinyarwanda language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Kinyarwanda corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + KIRNEWS + BBC Gahuza This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-lingala
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-lingala
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-lingala ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-lingala ### Model Description :
Davlan/xlm-roberta-base-finetuned-luganda
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda
language: lg datasets: xlm-roberta-base-finetuned-luganda is a Luganda RoBERTa model obtained by fine-tuning xlm-roberta-base model on Luganda language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Luganda corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + BUKKEDDE + Luganda CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-luganda ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda ### Model Description : language: lg datasets: xlm-roberta-base-finetuned-luganda is a Luganda RoBERTa model obtained by fine-tuning xlm-roberta-base model on Luganda language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Luganda corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + BUKKEDDE + Luganda CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-luo
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo
language: luo datasets: xlm-roberta-base-finetuned-luo is a Luo RoBERTa model obtained by fine-tuning xlm-roberta-base model on Luo language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Luo corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-luo ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo ### Model Description : language: luo datasets: xlm-roberta-base-finetuned-luo is a Luo RoBERTa model obtained by fine-tuning xlm-roberta-base model on Luo language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Luo corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-naija
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija
language: pcm datasets: xlm-roberta-base-finetuned-naija is a Nigerian Pidgin RoBERTa model obtained by fine-tuning xlm-roberta-base model on Nigerian Pidgin language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Nigerian Pidgin corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + BBC Pidgin This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-naija ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija ### Model Description : language: pcm datasets: xlm-roberta-base-finetuned-naija is a Nigerian Pidgin RoBERTa model obtained by fine-tuning xlm-roberta-base model on Nigerian Pidgin language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Nigerian Pidgin corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on JW300 + BBC Pidgin This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-shona
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-shona
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-shona ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-shona ### Model Description :
Davlan/xlm-roberta-base-finetuned-somali
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-somali
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-somali ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-somali ### Model Description :
Davlan/xlm-roberta-base-finetuned-swahili
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili
language: sw datasets: xlm-roberta-base-finetuned-swahili is a Swahili RoBERTa model obtained by fine-tuning xlm-roberta-base model on Swahili language texts. It provides better performance than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Swahili corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Swahili CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-swahili ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili ### Model Description : language: sw datasets: xlm-roberta-base-finetuned-swahili is a Swahili RoBERTa model obtained by fine-tuning xlm-roberta-base model on Swahili language texts. It provides better performance than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Swahili corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Swahili CC-100 This model was trained on a single NVIDIA V100 GPU By David Adelani
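The same fill-mask usage applies to the XLM-R checkpoints in this series, with the RoBERTa-style <mask> token instead of [MASK]; a sketch with an illustrative Swahili sentence follows.

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/xlm-roberta-base-finetuned-swahili")

# XLM-RoBERTa uses <mask> rather than [MASK]; the sentence is an illustrative placeholder.
for pred in unmasker("Mji mkuu wa Tanzania ni <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```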
Davlan/xlm-roberta-base-finetuned-wolof
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof
language: wo datasets: xlm-roberta-base-finetuned-wolof is a Wolof RoBERTa model obtained by fine-tuning xlm-roberta-base model on Wolof language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Wolof corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Bible OT + OPUS + News Corpora (Lu Defu Waxu, Saabal, and Wolof Online) This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-wolof ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof ### Model Description : language: wo datasets: xlm-roberta-base-finetuned-wolof is a Wolof RoBERTa model obtained by fine-tuning xlm-roberta-base model on Wolof language texts. It provides better performance than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Wolof corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Bible OT + OPUS + News Corpora (Lu Defu Waxu, Saabal, and Wolof Online) This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-xhosa
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-xhosa
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-xhosa ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-xhosa ### Model Description :
Davlan/xlm-roberta-base-finetuned-yoruba
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba
language: yo datasets: xlm-roberta-base-finetuned-yoruba is a Yoruba RoBERTa model obtained by fine-tuning xlm-roberta-base model on Yorùbá language texts. It provides better performance than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Yorùbá corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Bible, JW300, Menyo-20k, Yoruba Embedding corpus and CC-Aligned, Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends. This model was trained on a single NVIDIA V100 GPU By David Adelani
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-yoruba ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba ### Model Description : language: yo datasets: xlm-roberta-base-finetuned-yoruba is a Yoruba RoBERTa model obtained by fine-tuning xlm-roberta-base model on Yorùbá language texts. It provides better performance than the XLM-RoBERTa on text classification and named entity recognition datasets. Specifically, this model is a xlm-roberta-base model that was fine-tuned on Yorùbá corpus. You can use this model with Transformers pipeline for masked token prediction. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on Bible, JW300, Menyo-20k, Yoruba Embedding corpus and CC-Aligned, Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends. This model was trained on a single NVIDIA V100 GPU By David Adelani
Davlan/xlm-roberta-base-finetuned-zulu
https://huggingface.co/Davlan/xlm-roberta-base-finetuned-zulu
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-finetuned-zulu ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-finetuned-zulu ### Model Description : No model card New: Create and edit this model card directly on the website!
Davlan/xlm-roberta-base-masakhaner
https://huggingface.co/Davlan/xlm-roberta-base-masakhaner
language: datasets: xlm-roberta-base-masakhaner is the first Named Entity Recognition model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a xlm-roberta-base model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane MasakhaNER dataset. You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane MasakhaNER dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original MasakhaNER paper which trained & evaluated the model on MasakhaNER corpus.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-masakhaner ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-masakhaner ### Model Description : language: datasets: xlm-roberta-base-masakhaner is the first Named Entity Recognition model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER). Specifically, this model is a xlm-roberta-base model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane MasakhaNER dataset. You can use this model with Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) Masakhane MasakhaNER dataset The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes: This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original MasakhaNER paper which trained & evaluated the model on MasakhaNER corpus.
Davlan/xlm-roberta-base-ner-hrl
https://huggingface.co/Davlan/xlm-roberta-base-ner-hrl
xlm-roberta-base-ner-hrl is a Named Entity Recognition model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese, and Chinese), based on a fine-tuned XLM-RoBERTa base model. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-base model that was fine-tuned on an aggregation of datasets covering these 10 high-resourced languages. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. The training data for the 10 languages are drawn from separate, language-specific NER corpora listed in the original model card. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above). This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the Hugging Face code.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-ner-hrl ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-ner-hrl ### Model Description : xlm-roberta-base-ner-hrl is a Named Entity Recognition model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese, and Chinese), based on a fine-tuned XLM-RoBERTa base model. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-base model that was fine-tuned on an aggregation of datasets covering these 10 high-resourced languages. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. The training data for the 10 languages are drawn from separate, language-specific NER corpora listed in the original model card. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above). This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the Hugging Face code.
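The same pipeline usage applies to this checkpoint. As a hedged sketch (transformers installed, example sentences invented), the model and tokenizer can also be fetched directly from the Hub by ID; the identical pattern works for the other Davlan NER checkpoints listed further below (sadilar-ner, wikiann-ner, and the -large variants) by swapping the model ID.

```python
from transformers import pipeline

# Loading by Hub ID lets the pipeline fetch both the tokenizer and the model weights itself.
ner = pipeline("ner", model="Davlan/xlm-roberta-base-ner-hrl", aggregation_strategy="simple")

# Invented German and Spanish inputs, two of the ten supported languages.
for text in [
    "Angela Merkel besuchte im Mai die Vereinten Nationen in New York.",
    "El presidente de Telefónica viajó de Madrid a Lisboa.",
]:
    print(ner(text))
```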
Davlan/xlm-roberta-base-sadilar-ner
https://huggingface.co/Davlan/xlm-roberta-base-sadilar-ner
xlm-roberta-base-sadilar-ner is the first Named Entity Recognition model for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda, and Xitsonga), based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance on the NER task. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-base model that was fine-tuned on an aggregation of South African language datasets obtained from the SADILAR dataset. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. It was fine-tuned on the 10 South African language NER datasets (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda, and Xitsonga) of the SADILAR dataset. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above).
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-sadilar-ner ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-sadilar-ner ### Model Description : xlm-roberta-base-sadilar-ner is the first Named Entity Recognition model for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda, and Xitsonga), based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance on the NER task. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-base model that was fine-tuned on an aggregation of South African language datasets obtained from the SADILAR dataset. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. It was fine-tuned on the 10 South African language NER datasets (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda, and Xitsonga) of the SADILAR dataset. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above).
Davlan/xlm-roberta-base-wikiann-ner
https://huggingface.co/Davlan/xlm-roberta-base-wikiann-ner
xlm-roberta-base-wikiann-ner is the first Named Entity Recognition model for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese), based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance on the NER task. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-base model that was fine-tuned on an aggregation of language datasets obtained from the WikiANN dataset. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated Wikipedia-derived text from a specific span of time, so it may not generalize well to all use cases in other domains. It was fine-tuned on 20 NER datasets (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese) from WikiANN. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above).
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-base-wikiann-ner ### Model URL : https://huggingface.co/Davlan/xlm-roberta-base-wikiann-ner ### Model Description : xlm-roberta-base-wikiann-ner is the first Named Entity Recognition model for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese), based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance on the NER task. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-base model that was fine-tuned on an aggregation of language datasets obtained from the WikiANN dataset. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated Wikipedia-derived text from a specific span of time, so it may not generalize well to all use cases in other domains. It was fine-tuned on 20 NER datasets (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese) from WikiANN. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above).
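Because this checkpoint spans 20 languages and several scripts, a short sketch showing non-Latin input may be more useful than repeating the Latin-script example above. The same assumptions apply: transformers installed, sentences invented for illustration.

```python
from transformers import pipeline

ner = pipeline("ner", model="Davlan/xlm-roberta-base-wikiann-ner", aggregation_strategy="simple")

# Invented Hindi and Chinese inputs to exercise non-Latin scripts.
print(ner("महात्मा गांधी का जन्म पोरबंदर में हुआ था।"))  # "Mahatma Gandhi was born in Porbandar."
print(ner("马云在杭州创立了阿里巴巴。"))  # "Jack Ma founded Alibaba in Hangzhou."
```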
Davlan/xlm-roberta-large-masakhaner
https://huggingface.co/Davlan/xlm-roberta-large-masakhaner
xlm-roberta-large-masakhaner is the first Named Entity Recognition model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá), based on a fine-tuned XLM-RoBERTa large model. It achieves state-of-the-art performance on the NER task. It has been trained to recognize four types of entities: dates & times (DATE), locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-large model that was fine-tuned on an aggregation of African language datasets obtained from the Masakhane MasakhaNER dataset. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. It was fine-tuned on the 10 African language NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) of the Masakhane MasakhaNER dataset. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above). This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the original MasakhaNER paper, which trained and evaluated the model on the MasakhaNER corpus.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-large-masakhaner ### Model URL : https://huggingface.co/Davlan/xlm-roberta-large-masakhaner ### Model Description : xlm-roberta-large-masakhaner is the first Named Entity Recognition model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá), based on a fine-tuned XLM-RoBERTa large model. It achieves state-of-the-art performance on the NER task. It has been trained to recognize four types of entities: dates & times (DATE), locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-large model that was fine-tuned on an aggregation of African language datasets obtained from the Masakhane MasakhaNER dataset. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. It was fine-tuned on the 10 African language NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) of the Masakhane MasakhaNER dataset. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above). This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the original MasakhaNER paper, which trained and evaluated the model on the MasakhaNER corpus.
Davlan/xlm-roberta-large-ner-hrl
https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl
xlm-roberta-large-ner-hrl is a Named Entity Recognition model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese, and Chinese), based on a fine-tuned XLM-RoBERTa large model. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-large model that was fine-tuned on an aggregation of datasets covering these 10 high-resourced languages. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. The training data for the 10 languages are drawn from separate, language-specific NER corpora listed in the original model card. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above). This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the Hugging Face code.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Davlan/xlm-roberta-large-ner-hrl ### Model URL : https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl ### Model Description : xlm-roberta-large-ner-hrl is a Named Entity Recognition model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese, and Chinese), based on a fine-tuned XLM-RoBERTa large model. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER). Specifically, this model is an xlm-roberta-large model that was fine-tuned on an aggregation of datasets covering these 10 high-resourced languages. You can use this model with the Transformers pipeline for NER. This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. The training data for the 10 languages are drawn from separate, language-specific NER corpora listed in the original model card. The training dataset distinguishes between the beginning and continuation of an entity, so that if there are back-to-back entities of the same type the model can output where the second entity begins. As in the dataset, each token is classified with an IOB2-style tag (O, or B-/I- followed by one of the entity types above). This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the Hugging Face code.
Dawit/DialogGPT-small-ironman
https://huggingface.co/Dawit/DialogGPT-small-ironman
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dawit/DialogGPT-small-ironman ### Model URL : https://huggingface.co/Dawit/DialogGPT-small-ironman ### Model Description :
Dawn576/Dawn
https://huggingface.co/Dawn576/Dawn
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dawn576/Dawn ### Model URL : https://huggingface.co/Dawn576/Dawn ### Model Description : No model card New: Create and edit this model card directly on the website!
Daymarebait/Discord_BOT_RICK
https://huggingface.co/Daymarebait/Discord_BOT_RICK
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Daymarebait/Discord_BOT_RICK ### Model URL : https://huggingface.co/Daymarebait/Discord_BOT_RICK ### Model Description :
Dayout/test
https://huggingface.co/Dayout/test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dayout/test ### Model URL : https://huggingface.co/Dayout/test ### Model Description : No model card New: Create and edit this model card directly on the website!
Dazai/Ko
https://huggingface.co/Dazai/Ko
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dazai/Ko ### Model URL : https://huggingface.co/Dazai/Ko ### Model Description : No model card New: Create and edit this model card directly on the website!
Dazai/Ok
https://huggingface.co/Dazai/Ok
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dazai/Ok ### Model URL : https://huggingface.co/Dazai/Ok ### Model Description : No model card New: Create and edit this model card directly on the website!
Dbluciferm3737/Idk
https://huggingface.co/Dbluciferm3737/Idk
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dbluciferm3737/Idk ### Model URL : https://huggingface.co/Dbluciferm3737/Idk ### Model Description : No model card New: Create and edit this model card directly on the website!
Dbluciferm3737/U
https://huggingface.co/Dbluciferm3737/U
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dbluciferm3737/U ### Model URL : https://huggingface.co/Dbluciferm3737/U ### Model Description : No model card New: Create and edit this model card directly on the website!
Ddarkros/Test
https://huggingface.co/Ddarkros/Test
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Ddarkros/Test ### Model URL : https://huggingface.co/Ddarkros/Test ### Model Description : No model card New: Create and edit this model card directly on the website!
DeBERTa/deberta-v2-xxlarge
https://huggingface.co/DeBERTa/deberta-v2-xxlarge
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DeBERTa/deberta-v2-xxlarge ### Model URL : https://huggingface.co/DeBERTa/deberta-v2-xxlarge ### Model Description : No model card New: Create and edit this model card directly on the website!
DeadBeast/emoBERTTamil
https://huggingface.co/DeadBeast/emoBERTTamil
This model is a fine-tuned version of bert-base-uncased on the tamilmixsentiment dataset. Its results on the evaluation set and the hyperparameters used during training are listed in the original model card.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DeadBeast/emoBERTTamil ### Model URL : https://huggingface.co/DeadBeast/emoBERTTamil ### Model Description : This model is a fine-tuned version of bert-base-uncased on the tamilmixsentiment dataset. Its results on the evaluation set and the hyperparameters used during training are listed in the original model card.
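The crawled card does not show how to call the classifier. A minimal sketch follows, assuming transformers is installed; the Tamil-English code-mixed input is invented, and the label names are whatever the checkpoint's config defines, so they are printed rather than assumed here.

```python
from transformers import pipeline

# Text-classification pipeline for the fine-tuned sentiment model.
classifier = pipeline("text-classification", model="DeadBeast/emoBERTTamil")

# Invented code-mixed Tamil-English input (not taken from the tamilmixsentiment dataset).
print(classifier("Padam semma mass, vera level!"))
```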
DeadBeast/korscm-mBERT
https://huggingface.co/DeadBeast/korscm-mBERT
This model is a fine-tuned checkpoint of mBERT-base-cased, trained on the Hugging Face Kore_Scm dataset for text classification. Task: binary classification. Click on "Use in Transformers" on the model page for a ready-made loading snippet.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DeadBeast/korscm-mBERT ### Model URL : https://huggingface.co/DeadBeast/korscm-mBERT ### Model Description : This model is a fine-tuned checkpoint of mBERT-base-cased, trained on the Hugging Face Kore_Scm dataset for text classification. Task: binary classification. Click on "Use in Transformers" on the model page for a ready-made loading snippet.
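For readers without access to the model page's "Use in Transformers" button, here is a minimal sketch, assuming transformers is installed; the Korean sentence is invented, and the two class labels are read from the checkpoint's config rather than assumed.

```python
from transformers import pipeline

# Binary text-classification pipeline built from the fine-tuned mBERT checkpoint.
classifier = pipeline("text-classification", model="DeadBeast/korscm-mBERT")

# Invented Korean input.
print(classifier("와, 정말 대단한 아이디어네요."))
```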
DeadBeast/marathi-roberta-base
https://huggingface.co/DeadBeast/marathi-roberta-base
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DeadBeast/marathi-roberta-base ### Model URL : https://huggingface.co/DeadBeast/marathi-roberta-base ### Model Description : No model card New: Create and edit this model card directly on the website!