Model Name | URL | Crawled Text | text
---|---|---|---
BumBelDumBel/ZORK_AI_FANTASY | https://huggingface.co/BumBelDumBel/ZORK_AI_FANTASY | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : BumBelDumBel/ZORK_AI_FANTASY
### Model URL : https://huggingface.co/BumBelDumBel/ZORK_AI_FANTASY
### Model Description : No model card. |
BumBelDumBel/ZORK_AI_SCIFI | https://huggingface.co/BumBelDumBel/ZORK_AI_SCIFI | This model is a fine-tuned version of gpt2-medium on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : BumBelDumBel/ZORK_AI_SCIFI
### Model URL : https://huggingface.co/BumBelDumBel/ZORK_AI_SCIFI
### Model Description : This model is a fine-tuned version of gpt2-medium on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training: |
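The card above gives no usage snippet. The following is a minimal, hypothetical sketch of loading the fine-tuned gpt2-medium checkpoint with the transformers text-generation pipeline; the prompt and sampling settings are illustrative only.

```python
from transformers import pipeline

# Hypothetical sketch: load the fine-tuned gpt2-medium checkpoint from the entry above
# as a text-generation pipeline. Prompt and sampling settings are illustrative.
generator = pipeline("text-generation", model="BumBelDumBel/ZORK_AI_SCIFI")

prompt = "You are standing in a dimly lit control room."
outputs = generator(prompt, max_length=60, do_sample=True, num_return_sequences=1)
print(outputs[0]["generated_text"])
```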
BunakovD/sd | https://huggingface.co/BunakovD/sd | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : BunakovD/sd
### Model URL : https://huggingface.co/BunakovD/sd
### Model Description : No model card. |
Buntan/BuntanAI | https://huggingface.co/Buntan/BuntanAI | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Buntan/BuntanAI
### Model URL : https://huggingface.co/Buntan/BuntanAI
### Model Description : No model card. |
Buntan/bert-finetuned-ner | https://huggingface.co/Buntan/bert-finetuned-ner | This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Buntan/bert-finetuned-ner
### Model URL : https://huggingface.co/Buntan/bert-finetuned-ner
### Model Description : This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
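The card lists no usage example. A minimal sketch, assuming the checkpoint behaves as a standard CoNLL-2003 token-classification model, could look like this; the example sentence is illustrative and `aggregation_strategy` requires a recent transformers release.

```python
from transformers import pipeline

# Sketch only: assumes the checkpoint is a standard token-classification model
# fine-tuned on CoNLL-2003 entity labels. aggregation_strategy needs transformers>=4.x.
ner = pipeline(
    "token-classification",
    model="Buntan/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
# Each result contains entity_group, score, word, start, and end.
```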
Buntan/xlm-roberta-base-finetuned-marc-en | https://huggingface.co/Buntan/xlm-roberta-base-finetuned-marc-en | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Buntan/xlm-roberta-base-finetuned-marc-en
### Model URL : https://huggingface.co/Buntan/xlm-roberta-base-finetuned-marc-en
### Model Description : No model card. |
Bwehfuk/Ron | https://huggingface.co/Bwehfuk/Ron | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Bwehfuk/Ron
### Model URL : https://huggingface.co/Bwehfuk/Ron
### Model Description : No model card. |
CALM/CALM | https://huggingface.co/CALM/CALM | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CALM/CALM
### Model URL : https://huggingface.co/CALM/CALM
### Model Description : No model card. |
CALM/backup | https://huggingface.co/CALM/backup | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CALM/backup
### Model URL : https://huggingface.co/CALM/backup
### Model Description : No model card. |
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | CAMeLBERT-CA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-CA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
### Model Description : CAMeLBERT-CA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-CA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
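The crawled card refers to a transformers-pipeline snippet that did not survive extraction. A minimal sketch of that route, using the standard `ner` pipeline and an illustrative Arabic sentence, is shown below; the CAMeL Tools NER component the card recommends is not reproduced here.

```python
from transformers import pipeline

# Sketch of the transformers-pipeline route the card mentions (transformers>=3.5.0).
# The Arabic sentence is only an example input.
ner = pipeline("ner", model="CAMeL-Lab/bert-base-arabic-camelbert-ca-ner")

sentence = "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
print(ner(sentence))  # token-level predictions with B-/I- entity tags and scores
```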
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | CAMeLBERT-CA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
### Model Description : CAMeLBERT-CA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
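The transformers-pipeline usage mentioned in the card was not captured by the crawl. A minimal sketch, assuming the model is served as an ordinary text-classification checkpoint whose labels are APCD poetic meters, might look like this; the verse is illustrative.

```python
from transformers import pipeline

# Sketch of the transformers-pipeline route the card mentions (transformers>=3.5.0).
# Labels are assumed to be the APCD poetic meters; the verse is only an example input.
poetry = pipeline(
    "text-classification",
    model="CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry",
)

verse = "الخيل والليل والبيداء تعرفني والسيف والرمح والقرطاس والقلم"
print(poetry(verse))
```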
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | CAMeLBERT-CA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
### Model Description : CAMeLBERT-CA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
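As above, the pipeline snippet was lost in the crawl. A minimal sketch of POS tagging via token classification, with an illustrative Egyptian Arabic phrase, is given below.

```python
from transformers import pipeline

# Sketch of POS tagging as token classification (transformers>=3.5.0).
# Each token receives a part-of-speech tag; the phrase is only an example input.
pos = pipeline(
    "token-classification",
    model="CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy",
)

print(pos("عامل ايه ؟"))  # Egyptian Arabic: "How are you doing?"
```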
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | CAMeLBERT-CA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
### Model Description : CAMeLBERT-CA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | CAMeLBERT-CA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
### Model Description : CAMeLBERT-CA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-CA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-CA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | CAMeLBERT-CA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-CA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment
### Model Description : CAMeLBERT-CA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Classical Arabic (CA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-CA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
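The CAMeL Tools SA snippet and the transformers snippet referenced in the card were not captured. Below is a minimal sketch of the transformers route only; the example sentences and label names are illustrative assumptions.

```python
from transformers import pipeline

# Sketch of the transformers-pipeline route only (transformers>=3.5.0);
# the CAMeL Tools SA component the card recommends is not shown.
sa = pipeline(
    "text-classification",
    model="CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment",
)

sentences = ["أنا بخير", "أنا لست بخير"]  # "I am fine", "I am not fine"
print(sa(sentences))  # each result has a label (e.g. positive/negative) and a score
```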
CAMeL-Lab/bert-base-arabic-camelbert-ca | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-CA (bert-base-arabic-camelbert-ca), a model pre-trained on the CA (classical Arabic) dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-ca
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-CA (bert-base-arabic-camelbert-ca), a model pre-trained on the CA (classical Arabic) dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
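The masked-language-modeling and feature-extraction snippets referenced in the card did not survive the crawl. A minimal sketch of both, assuming standard BERT conventions (a [MASK] token, AutoModel hidden states) and an illustrative Arabic input, follows; a recent transformers version is assumed for the dict-style outputs.

```python
from transformers import AutoModel, AutoTokenizer, pipeline

model_id = "CAMeL-Lab/bert-base-arabic-camelbert-ca"

# Masked language modeling via the fill-mask pipeline (transformers>=3.5.0).
fill_mask = pipeline("fill-mask", model=model_id)
print(fill_mask("الهدف من الحياة هو [MASK] ."))  # top candidate fillers for the mask

# Feature extraction in PyTorch: last hidden states for an example sentence.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
inputs = tokenizer("مرحبا يا عالم", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```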
CAMeL-Lab/bert-base-arabic-camelbert-da-ner | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-ner | CAMeLBERT-DA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-DA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da-ner
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-ner
### Model Description : CAMeLBERT-DA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-DA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | CAMeLBERT-DA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
### Model Description : CAMeLBERT-DA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | CAMeLBERT-DA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
### Model Description : CAMeLBERT-DA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | CAMeLBERT-DA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf
### Model Description : CAMeLBERT-DA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-DA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa | CAMeLBERT-DA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
### Model Description : CAMeLBERT-DA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-DA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | CAMeLBERT-DA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
### Model Description : CAMeLBERT-DA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Dialectal Arabic (DA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." You can use the CAMeLBERT-DA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-da | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-DA (bert-base-arabic-camelbert-da), a model pre-trained on the DA (dialectal Arabic) dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-da
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-DA (bert-base-arabic-camelbert-da), a model pre-trained on the DA (dialectal Arabic) dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | CAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
### Model Description : CAMeLBERT-Mix DID Madar Corpus26 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 26 dataset, which includes 26 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
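The pipeline snippet referenced in the card was not captured. A minimal sketch of dialect identification as text classification, with an illustrative input, is shown below.

```python
from transformers import pipeline

# Sketch of dialect identification as text classification (transformers>=3.5.0).
# The predicted label is assumed to be one of the 26 MADAR Corpus 26 variety labels.
did = pipeline(
    "text-classification",
    model="CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26",
)

print(did("عامل ايه ؟"))  # example input; expected to map to an Egyptian variety label
```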
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | CAMeLBERT-Mix DID MADAR Corpus6 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 6 dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
### Model Description : CAMeLBERT-Mix DID MADAR Corpus6 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the MADAR Corpus 6 dataset, which includes 6 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | CAMeLBERT-Mix DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the NADI Country-level dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
### Model Description : CAMeLBERT-Mix DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the NADI Country-level dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-ner | CAMeLBERT-Mix NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-ner
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-ner
### Model Description : CAMeLBERT-Mix NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | CAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry
### Model Description : CAMeLBERT-Mix Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | CAMeLBERT-Mix POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy
### Model Description : CAMeLBERT-Mix POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | CAMeLBERT-Mix POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
### Model Description : CAMeLBERT-Mix POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | CAMeLBERT-Mix POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
### Model Description : CAMeLBERT-Mix POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-Mix model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-Mix POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | CAMeLBERT Mix SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT Mix SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
### Model Description : CAMeLBERT Mix SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Mix model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT Mix SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-mix | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-Mix (bert-base-arabic-camelbert-mix), a model pre-trained on a mixture of these variants: MSA, DA, and CA. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-mix
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-Mix (bert-base-arabic-camelbert-mix), a model pre-trained on a mixture of these variants: MSA, DA, and CA. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
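The PyTorch and TensorFlow feature-extraction snippets referenced in the card were not captured. A minimal TensorFlow sketch, assuming TF weights are available for this repo (otherwise `from_pt=True` would be needed on load), follows; the input sentence is illustrative.

```python
from transformers import AutoTokenizer, TFAutoModel

model_id = "CAMeL-Lab/bert-base-arabic-camelbert-mix"

# Feature extraction in TensorFlow, mirroring the PyTorch route described in the card.
# If only PyTorch weights are hosted, pass from_pt=True to convert on load.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)

inputs = tokenizer("مرحبا يا عالم", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```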
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | CAMeLBERT-MSA DID MADAR Twitter-5 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the MADAR Twitter-5 dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
### Model Description : CAMeLBERT-MSA DID MADAR Twitter-5 Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the MADAR Twitter-5 dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | CAMeLBERT-MSA DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the NADI Country-level dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
### Model Description : CAMeLBERT-MSA DID NADI Model is a dialect identification (DID) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the NADI Country-level dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-eighth (bert-base-arabic-camelbert-msa-eighth), a model pre-trained on an eighth of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-eighth (bert-base-arabic-camelbert-msa-eighth), a model pre-trained on an eighth of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
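The masked-language-modeling snippet referred to in the description did not survive the crawl; a minimal sketch, assuming the standard fill-mask pipeline and an illustrative masked sentence:

```python
from transformers import pipeline

# Masked language modeling with the pre-trained checkpoint (sketch).
unmasker = pipeline("fill-mask", model="CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth")
# [MASK] is the BERT mask token; the sentence below is illustrative only.
print(unmasker("الهدف من الحياة هو [MASK] ."))
```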
CAMeL-Lab/bert-base-arabic-camelbert-msa-half | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-half | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-half (bert-base-arabic-camelbert-msa-half), a model pre-trained on a half of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-half
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-half
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-half (bert-base-arabic-camelbert-msa-half), a model pre-trained on a half of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
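The PyTorch feature-extraction snippet mentioned in the description ("get the features of a given text in PyTorch") is not included in the crawled text; a sketch under the usual AutoTokenizer/AutoModel pattern, with an illustrative input sentence:

```python
from transformers import AutoTokenizer, AutoModel

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-msa-half"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Any Arabic text works here; this sentence is illustrative only.
encoded_input = tokenizer("مرحبا يا عالم", return_tensors="pt")
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```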
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-ner | CAMeLBERT MSA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT MSA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
### Model Description : CAMeLBERT MSA NER Model is a Named Entity Recognition (NER) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the ANERcorp dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT MSA NER model directly as part of our CAMeL Tools NER component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools NER component: You can also use the NER model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
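The transformers-pipeline snippet mentioned above is missing from the crawled text; a minimal sketch (the CAMeL Tools route the authors recommend is not reproduced here, and the example sentence is illustrative):

```python
from transformers import pipeline

# Named entity recognition via the standard NER pipeline (sketch).
ner = pipeline("ner", model="CAMeL-Lab/bert-base-arabic-camelbert-msa-ner")
print(ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"))
# -> list of tokens tagged with entity labels such as B-LOC / I-LOC
```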
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry | CAMeLBERT-MSA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
### Model Description : CAMeLBERT-MSA Poetry Classification Model is a poetry classification model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the APCD dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA Poetry Classification model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | CAMeLBERT-MSA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy
### Model Description : CAMeLBERT-MSA POS-EGY Model is an Egyptian Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the ARZTB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA POS-EGY model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
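A sketch of the pipeline usage described above, assuming the standard token-classification pipeline (the code itself is not included in the crawled card, and the Egyptian Arabic input is illustrative):

```python
from transformers import pipeline

# POS tagging as token classification: each token is returned with its tag as the entity label.
pos = pipeline(
    "token-classification",
    model="CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy",
)
print(pos("عامل ايه ؟"))  # illustrative input
```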
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | CAMeLBERT-MSA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
### Model Description : CAMeLBERT-MSA POS-GLF Model is a Gulf Arabic POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the Gumar dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."
Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa | CAMeLBERT-MSA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
### Model Description : CAMeLBERT-MSA POS-MSA Model is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the CAMeLBERT-MSA model.
For the fine-tuning, we used the PATB dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT-MSA POS-MSA model as part of the transformers pipeline.
This model will also be available in CAMeL Tools soon. To use the model with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-quarter (bert-base-arabic-camelbert-msa-quarter), a model pre-trained on a quarter of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-quarter (bert-base-arabic-camelbert-msa-quarter), a model pre-trained on a quarter of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | CAMeLBERT MSA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT MSA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment
### Model Description : CAMeLBERT MSA SA Model is a Sentiment Analysis (SA) model that was built by fine-tuning the CAMeLBERT Modern Standard Arabic (MSA) model.
For the fine-tuning, we used the ASTD, ArSAS, and SemEval datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." Our fine-tuning code can be found here. You can use the CAMeLBERT MSA SA model directly as part of our CAMeL Tools SA component (recommended) or as part of the transformers pipeline. To use the model with the CAMeL Tools SA component: You can also use the SA model directly with a transformers pipeline: Note: to download our models, you would need transformers>=3.5.0.
Otherwise, you could download the models manually. |
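A minimal transformers-only sketch of the SA pipeline usage described above (the CAMeL Tools component the authors recommend is not shown, and the input sentence is illustrative; label names depend on the model config):

```python
from transformers import pipeline

sa = pipeline(
    "sentiment-analysis",
    model="CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment",
)
print(sa("أنا بخير"))
# -> [{'label': 'positive', 'score': ...}]
```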
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-sixteenth (bert-base-arabic-camelbert-msa-sixteenth), a model pre-trained on a sixteenth of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA-sixteenth (bert-base-arabic-camelbert-msa-sixteenth), a model pre-trained on a sixteenth of the full MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
CAMeL-Lab/bert-base-arabic-camelbert-msa | https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa | CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA (bert-base-arabic-camelbert-msa), a model pre-trained on the entire MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAMeL-Lab/bert-base-arabic-camelbert-msa
### Model URL : https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa
### Model Description : CAMeLBERT is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
We release pre-trained language models for Modern Standard Arabic (MSA), dialectal Arabic (DA), and classical Arabic (CA), in addition to a model pre-trained on a mix of the three.
We also provide additional models that are pre-trained on a scaled-down set of the MSA variant (half, quarter, eighth, and sixteenth).
The details are described in the paper "The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models." This model card describes CAMeLBERT-MSA (bert-base-arabic-camelbert-msa), a model pre-trained on the entire MSA dataset. You can use the released model for either masked language modeling or next sentence prediction.
However, it is mostly intended to be fine-tuned on an NLP task, such as NER, POS tagging, sentiment analysis, dialect identification, and poetry classification.
We release our fine-tuning code here. You can use this model directly with a pipeline for masked language modeling: Note: to download our models, you would need transformers>=3.5.0. Otherwise, you could download the models manually. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: We use the original implementation released by Google for pre-training.
We follow the original English BERT model's hyperparameters for pre-training, unless otherwise specified. [1]: Variant-wise-average refers to average over a group of tasks in the same language variant. This research was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC). |
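The TensorFlow counterpart of the feature-extraction snippet ("and in TensorFlow:") is likewise missing from the crawled text; a sketch, with the assumption that from_pt=True is needed if only PyTorch weights are published for this checkpoint:

```python
from transformers import AutoTokenizer, TFAutoModel

model_name = "CAMeL-Lab/bert-base-arabic-camelbert-msa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# from_pt=True converts PyTorch weights on the fly when no TF checkpoint is provided.
model = TFAutoModel.from_pretrained(model_name, from_pt=True)

encoded_input = tokenizer("مرحبا يا عالم", return_tensors="tf")  # illustrative text
output = model(encoded_input)
print(output.last_hidden_state.shape)
```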
CAUKiel/JavaBERT-uncased | https://huggingface.co/CAUKiel/JavaBERT-uncased | A BERT-like model pretrained on Java software code. The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A bert-base-uncased tokenizer is used by this model. An MLM (Masked Language Model) objective was used to train this model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAUKiel/JavaBERT-uncased
### Model URL : https://huggingface.co/CAUKiel/JavaBERT-uncased
### Model Description : A BERT-like model pretrained on Java software code. The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A bert-base-uncased tokenizer is used by this model. An MLM (Masked Language Model) objective was used to train this model.
CAUKiel/JavaBERT | https://huggingface.co/CAUKiel/JavaBERT | A BERT-like model pretrained on Java software code. A BERT-like model pretrained on Java software code. Fill-Mask More information needed. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A bert-base-cased tokenizer is used by this model. An MLM (Masked Language Model) objective was used to train this model. More information needed. More information needed. More information needed. More information needed. More information needed. More information needed. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). More information needed. More information needed. More information needed. More information needed. BibTeX: More information needed. APA: More information needed. More information needed. More information needed. Christian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face More information needed. Use the code below to get started with the model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CAUKiel/JavaBERT
### Model URL : https://huggingface.co/CAUKiel/JavaBERT
### Model Description : A BERT-like model pretrained on Java software code. A BERT-like model pretrained on Java software code. Fill-Mask More information needed. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A bert-base-cased tokenizer is used by this model. An MLM (Masked Language Model) objective was used to train this model. More information needed. More information needed. More information needed. More information needed. More information needed. More information needed. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). More information needed. More information needed. More information needed. More information needed. BibTeX: More information needed. APA: More information needed. More information needed. More information needed. Christian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face More information needed. Use the code below to get started with the model.
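The "code below" referred to in the card is not present in the crawled text; a minimal fill-mask sketch, assuming a hypothetical masked Java statement as input:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="CAUKiel/JavaBERT")
# [MASK] is the BERT mask token; the Java snippet is a made-up example.
predictions = fill_mask("public [MASK] void main(String[] args) { }")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```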
CBreit00/DialoGPT_small_Rick | https://huggingface.co/CBreit00/DialoGPT_small_Rick | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CBreit00/DialoGPT_small_Rick
### Model URL : https://huggingface.co/CBreit00/DialoGPT_small_Rick
### Model Description : No model card New: Create and edit this model card directly on the website! |
CL/safe-math-bot | https://huggingface.co/CL/safe-math-bot | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CL/safe-math-bot
### Model URL : https://huggingface.co/CL/safe-math-bot
### Model Description : No model card New: Create and edit this model card directly on the website! |
CLAck/en-km | https://huggingface.co/CLAck/en-km | This model translates from English to Khmer.
It is the purely fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on the Khmer language. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLAck/en-km
### Model URL : https://huggingface.co/CLAck/en-km
### Model Description : This model translates from English to Khmer.
It is the purely fine-tuned version of the MarianMT en-zh model.
This is the result after 30 epochs of pure fine-tuning on the Khmer language.
CLAck/en-vi | https://huggingface.co/CLAck/en-vi | This is a fine-tuned version of a MarianMT model pretrained on English-Chinese. The target language pair is English-Vietnamese.
The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences.
The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences. MIXED PURE | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLAck/en-vi
### Model URL : https://huggingface.co/CLAck/en-vi
### Model Description : This is a fine-tuned version of a MarianMT model pretrained on English-Chinese. The target language pair is English-Vietnamese.
The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences.
The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences. MIXED PURE |
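No usage snippet survives in the crawled card; a sketch of the usual MarianMT generation loop (the English input sentence is illustrative, and depending on how the tokenizer was set up a target-language prefix token may additionally be required):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "CLAck/en-vi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a batch of English sentences and generate the Vietnamese translations.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```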
CLAck/indo-mixed | https://huggingface.co/CLAck/indo-mixed | This model is pretrained on the Chinese and Indonesian languages, and fine-tuned on the Indonesian language. MIXED FINETUNING | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLAck/indo-mixed
### Model URL : https://huggingface.co/CLAck/indo-mixed
### Model Description : This model is pretrained on the Chinese and Indonesian languages, and fine-tuned on the Indonesian language. MIXED FINETUNING
CLAck/indo-pure | https://huggingface.co/CLAck/indo-pure | A purely fine-tuned version of MarianMT en-zh for the Indonesian language. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLAck/indo-pure
### Model URL : https://huggingface.co/CLAck/indo-pure
### Model Description : A purely fine-tuned version of MarianMT en-zh for the Indonesian language.
CLAck/vi-en | https://huggingface.co/CLAck/vi-en | This is a fine-tuned version of a MarianMT model pretrained on Chinese-English. The target language pair is Vietnamese-English. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLAck/vi-en
### Model URL : https://huggingface.co/CLAck/vi-en
### Model Description : This is a fine-tuned version of a MarianMT model pretrained on Chinese-English. The target language pair is Vietnamese-English.
CLEE/CLEE | https://huggingface.co/CLEE/CLEE | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLEE/CLEE
### Model URL : https://huggingface.co/CLEE/CLEE
### Model Description : No model card New: Create and edit this model card directly on the website! |
CLS/WubiBERT_models | https://huggingface.co/CLS/WubiBERT_models | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLS/WubiBERT_models
### Model URL : https://huggingface.co/CLS/WubiBERT_models
### Model Description : No model card New: Create and edit this model card directly on the website! |
CLTL/MedRoBERTa.nl | https://huggingface.co/CLTL/MedRoBERTa.nl | This model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model. The model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. The model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure. By anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task. Stella Verkijk, Piek Vossen Paper: Verkijk, S. & Vossen, P. (2022) MedRoBERTa.nl: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/MedRoBERTa.nl
### Model URL : https://huggingface.co/CLTL/MedRoBERTa.nl
### Model Description : This model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model. The model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. The model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure. By anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task. Stella Verkijk, Piek Vossen Paper: Verkijk, S. & Vossen, P. (2022) MedRoBERTa.nl: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11. |
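Since the checkpoint is released without a task head, a minimal loading sketch for downstream fine-tuning may help; the sequence-classification head and num_labels=2 below are only placeholders for whatever task the model is fine-tuned on:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("CLTL/MedRoBERTa.nl")
# num_labels is task-specific; 2 is just a placeholder for a binary classification task.
model = AutoModelForSequenceClassification.from_pretrained("CLTL/MedRoBERTa.nl", num_labels=2)
```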
CLTL/gm-ner-xlmrbase | https://huggingface.co/CLTL/gm-ner-xlmrbase | This is a fine-tuned NER model for early-modern Dutch United East India Company (VOC) letters based on XLM-R_base (Conneau et al., 2020). The model identifies locations, persons, organisations, but also ships as well as derived forms of locations and religions. This model was fine-tuned (trained, validated and tested) on a single source of data, the General Letters (Generale Missiven). These letters span a large variety of Dutch, as they cover the largest part of the 17th and 18th centuries, and have been extended with editorial notes between 1960 and 2017. As the model was only fine-tuned on this data however, it may perform less well on other texts from the same period. The model can run on raw text through the token-classification pipeline: This outputs a list of entities with their character offsets in the input text: The model was fine-tuned on the General Letters GM-NER dataset, with the following tagset: The base text for this dataset is OCR text that has been partially corrected. The text is clean overall but errors remain. The model was fine-tuned with xlm-roberta-base, using this script. Non-default training parameters are: The model and fine-tuning data presented here were developed as part of: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/gm-ner-xlmrbase
### Model URL : https://huggingface.co/CLTL/gm-ner-xlmrbase
### Model Description : This is a fine-tuned NER model for early-modern Dutch United East India Company (VOC) letters based on XLM-R_base (Conneau et al., 2020). The model identifies locations, persons, organisations, but also ships as well as derived forms of locations and religions. This model was fine-tuned (trained, validated and tested) on a single source of data, the General Letters (Generale Missiven). These letters span a large variety of Dutch, as they cover the largest part of the 17th and 18th centuries, and have been extended with editorial notes between 1960 and 2017. As the model was only fine-tuned on this data however, it may perform less well on other texts from the same period. The model can run on raw text through the token-classification pipeline: This outputs a list of entities with their character offsets in the input text: The model was fine-tuned on the General Letters GM-NER dataset, with the following tagset: The base text for this dataset is OCR text that has been partially corrected. The text is clean overall but errors remain. The model was fine-tuned with xlm-roberta-base, using this script. Non-default training parameters are: The model and fine-tuning data presented here were developed as part of: |
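The token-classification example and its output listing were not captured in the crawled text; a sketch with an invented Dutch sentence (aggregation_strategy="simple" merges word pieces into whole entity spans with character offsets):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CLTL/gm-ner-xlmrbase",
    aggregation_strategy="simple",
)
print(ner("De schepen vertrokken uit Batavia naar Amsterdam."))
# -> entities with labels, scores and character offsets (start/end) in the input
```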
CLTL/icf-domains | https://huggingface.co/CLTL/icf-domains | A fine-tuned multi-label classification model that detects 9 WHO-ICF domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model (link to be added), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19: To generate predictions with the model, use the Simple Transformers library: The predictions look like this: The indices of the multi-label stand for: In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence. The raw outputs look like this: For this model, the threshold at which the prediction for a label flips from 0 to 1 is 0.5. The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-domains
### Model URL : https://huggingface.co/CLTL/icf-domains
### Model Description : A fine-tuned multi-label classification model that detects 9 WHO-ICF domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model (link to be added), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19: To generate predictions with the model, use the Simple Transformers library: The predictions look like this: The indices of the multi-label stand for: In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence. The raw outputs look like this: For this model, the threshold at which the prediction for a label flips from 0 to 1 is 0.5. The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
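The Simple Transformers snippet and its example outputs are missing from the crawled text; a sketch of the documented predict() call, assuming a CPU-only setup (use_cuda=False) and an illustrative Dutch sentence:

```python
from simpletransformers.classification import MultiLabelClassificationModel

model = MultiLabelClassificationModel(
    "roberta",
    "CLTL/icf-domains",
    use_cuda=False,  # set to True when a GPU is available
)
sentence = "Patiënt is kortademig bij het lopen van korte stukken."  # illustrative input
predictions, raw_outputs = model.predict([sentence])
print(predictions)  # one 9-element binary vector per sentence, index order as listed above
print(raw_outputs)  # probabilities per label; the 0/1 threshold is 0.5
```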
CLTL/icf-levels-adm | https://huggingface.co/CLTL/icf-levels-adm | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-adm
### Model URL : https://huggingface.co/CLTL/icf-levels-adm
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
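Here too the Simple Transformers call was not captured; a regression sketch, where num_labels=1 and regression=True reflect how Simple Transformers typically handles regression heads and the sentence is illustrative:

```python
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    "roberta",
    "CLTL/icf-levels-adm",
    num_labels=1,
    args={"regression": True},
    use_cuda=False,  # set to True when a GPU is available
)
sentence = "Patiënt is kortademig bij het lopen van korte stukken."  # illustrative input
predictions, raw_outputs = model.predict([sentence])
print(predictions)  # one functioning-level score per sentence, possibly fractional
```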
CLTL/icf-levels-att | https://huggingface.co/CLTL/icf-levels-att | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-att
### Model URL : https://huggingface.co/CLTL/icf-levels-att
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about attention functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-ber | https://huggingface.co/CLTL/icf-levels-ber | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-ber
### Model URL : https://huggingface.co/CLTL/icf-levels-ber
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing work and employment functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about work and employment functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-enr | https://huggingface.co/CLTL/icf-levels-enr | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-enr
### Model URL : https://huggingface.co/CLTL/icf-levels-enr
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing energy level. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about energy level in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-etn | https://huggingface.co/CLTL/icf-levels-etn | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-etn
### Model URL : https://huggingface.co/CLTL/icf-levels-etn
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing eating functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about eating functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-fac | https://huggingface.co/CLTL/icf-levels-fac | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-fac
### Model URL : https://huggingface.co/CLTL/icf-levels-fac
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-ins | https://huggingface.co/CLTL/icf-levels-ins | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-ins
### Model URL : https://huggingface.co/CLTL/icf-levels-ins
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing exercise tolerance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about exercise tolerance functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-mbw | https://huggingface.co/CLTL/icf-levels-mbw | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-mbw
### Model URL : https://huggingface.co/CLTL/icf-levels-mbw
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CLTL/icf-levels-stm | https://huggingface.co/CLTL/icf-levels-stm | A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CLTL/icf-levels-stm
### Model URL : https://huggingface.co/CLTL/icf-levels-stm
### Model Description : A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the icf-domains classification model. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. To generate predictions with the model, use the Simple Transformers library: The prediction on the example is: The raw outputs look like this: The default training parameters of Simple Transformers were used, including: The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). Jenia Kim, Piek Vossen TBD |
CM-CA/Cartman | https://huggingface.co/CM-CA/Cartman | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CM-CA/Cartman
### Model URL : https://huggingface.co/CM-CA/Cartman
### Model Description : No model card New: Create and edit this model card directly on the website! |
CM-CA/DialoGPT-small-cartman | https://huggingface.co/CM-CA/DialoGPT-small-cartman | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CM-CA/DialoGPT-small-cartman
### Model URL : https://huggingface.co/CM-CA/DialoGPT-small-cartman
### Model Description : No model card New: Create and edit this model card directly on the website! |
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | https://huggingface.co/CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification | emilyalsentzer/Bio_ClinicalBERT with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing." Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022; ocac018, https://doi.org/10.1093/jamia/ocac018 Bio_ClinicalBERT_for_seizureFreedom_classification classifies patients as having seizures or being seizure free using the HPI and/or Interval History paragraphs from a medical note. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification
### Model URL : https://huggingface.co/CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification
### Model Description : emilyalsentzer/Bio_ClinicalBERT with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing." Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022; ocac018, https://doi.org/10.1093/jamia/ocac018 Bio_ClinicalBERT_for_seizureFreedom_classification classifies patients as having seizures or being seizure free using the HPI and/or Interval History paragraphs from a medical note.
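Since the card describes sequence classification of HPI / Interval History paragraphs into seizures vs. seizure free, a minimal sketch of loading it with the Transformers text-classification pipeline may be useful; the label names and the example paragraph below are assumptions, not taken from the card:

```python
# Minimal sketch, assuming the checkpoint ships a standard sequence-classification
# head; the HPI paragraph is invented for illustration.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification",
)

hpi_paragraph = (
    "The patient reports no seizure-like events since the last visit and "
    "remains on the current dose of levetiracetam."
)
print(classifier(hpi_paragraph))  # e.g. [{'label': ..., 'score': ...}]
```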
CNT-UPenn/RoBERTa_for_seizureFrequency_QA | https://huggingface.co/CNT-UPenn/RoBERTa_for_seizureFrequency_QA | RoBERTa-base with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing." Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022; ocac018, https://doi.org/10.1093/jamia/ocac018 RoBERTa_for_seizureFrequency_QA performs extractive question answering to identify a patient's seizure freedom and/or date of last seizure using the HPI and/or Interval History paragraphs from a medical note. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CNT-UPenn/RoBERTa_for_seizureFrequency_QA
### Model URL : https://huggingface.co/CNT-UPenn/RoBERTa_for_seizureFrequency_QA
### Model Description : RoBERTa-base with additional training through the finetuning pipeline described in "Extracting Seizure Frequency From Epilepsy Clinic Notes: A Machine Reading Approach To Natural Language Processing." Citation: Kevin Xie, Ryan S Gallagher, Erin C Conrad, Chadric O Garrick, Steven N Baldassano, John M Bernabei, Peter D Galer, Nina J Ghosn, Adam S Greenblatt, Tara Jennings, Alana Kornspun, Catherine V Kulick-Soper, Jal M Panchal, Akash R Pattnaik, Brittany H Scheid, Danmeng Wei, Micah Weitzman, Ramya Muthukrishnan, Joongwon Kim, Brian Litt, Colin A Ellis, Dan Roth, Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing, Journal of the American Medical Informatics Association, 2022; ocac018, https://doi.org/10.1093/jamia/ocac018 RoBERTa_for_seizureFrequency_QA performs extractive question answering to identify a patient's seizure freedom and/or date of last seizure using the HPI and/or Interval History paragraphs from a medical note.
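The card describes extractive question answering over HPI / Interval History paragraphs; a minimal sketch with the Transformers question-answering pipeline follows, where the questions and the note text are invented for illustration:

```python
# Minimal sketch, assuming a standard extractive question-answering head;
# the questions and note text are invented for illustration.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="CNT-UPenn/RoBERTa_for_seizureFrequency_QA",
)

note = (
    "Interval History: Last seizure was in March 2021. Since then the patient "
    "has been seizure free on lamotrigine."
)
print(qa(question="When was the patient's last seizure?", context=note))
print(qa(question="Is the patient seizure free?", context=note))
```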
CSResearcher/TestModel | https://huggingface.co/CSResearcher/TestModel | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CSResearcher/TestModel
### Model URL : https://huggingface.co/CSResearcher/TestModel
### Model Description : |
CSZay/bart | https://huggingface.co/CSZay/bart | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CSZay/bart
### Model URL : https://huggingface.co/CSZay/bart
### Model Description : No model card New: Create and edit this model card directly on the website! |
CTBC/ATS | https://huggingface.co/CTBC/ATS | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CTBC/ATS
### Model URL : https://huggingface.co/CTBC/ATS
### Model Description : No model card New: Create and edit this model card directly on the website! |
CZWin32768/xlm-align | https://huggingface.co/CZWin32768/xlm-align | Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment (ACL-2021, paper, github) XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our paper. XTREME cross-lingual understanding tasks: Contact: chizewen@outlook.com BibTeX: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CZWin32768/xlm-align
### Model URL : https://huggingface.co/CZWin32768/xlm-align
### Model Description : Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment (ACL-2021, paper, github) XLM-Align is a pretrained cross-lingual language model that supports 94 languages. See details in our paper. XTREME cross-lingual understanding tasks: Contact: chizewen@outlook.com BibTeX: |
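The card presents XLM-Align as a pretrained cross-lingual encoder for downstream fine-tuning; a minimal sketch of loading it as a generic feature extractor with Transformers (the example sentence is illustrative):

```python
# Minimal sketch: load the checkpoint as a plain encoder and inspect its outputs.
# Assumes the repository is loadable through the Auto* classes.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("CZWin32768/xlm-align")
model = AutoModel.from_pretrained("CZWin32768/xlm-align")

inputs = tokenizer("XLM-Align supports 94 languages.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```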
Caddy/UD | https://huggingface.co/Caddy/UD | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Caddy/UD
### Model URL : https://huggingface.co/Caddy/UD
### Model Description : No model card New: Create and edit this model card directly on the website! |
Calamarii/calamari | https://huggingface.co/Calamarii/calamari | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Calamarii/calamari
### Model URL : https://huggingface.co/Calamarii/calamari
### Model Description : No model card New: Create and edit this model card directly on the website! |
Callidior/bert2bert-base-arxiv-titlegen | https://huggingface.co/Callidior/bert2bert-base-arxiv-titlegen | Generates titles for computer science papers given an abstract. The model is a BERT2BERT Encoder-Decoder using the official bert-base-uncased checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data. Live Demo: https://paper-titles.ey.r.appspot.com/ | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Callidior/bert2bert-base-arxiv-titlegen
### Model URL : https://huggingface.co/Callidior/bert2bert-base-arxiv-titlegen
### Model Description : Generates titles for computer science papers given an abstract. The model is a BERT2BERT Encoder-Decoder using the official bert-base-uncased checkpoint as initialization for the encoder and decoder.
It was fine-tuned on 318,500 computer science papers posted on arXiv.org between 2007 and 2022 and achieved a 26.3% Rouge2 F1-Score on held-out validation data. Live Demo: https://paper-titles.ey.r.appspot.com/ |
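Because the card describes a BERT2BERT encoder-decoder that maps an abstract to a title, a minimal generation sketch with the Transformers EncoderDecoderModel class may help; the abstract, beam-search settings, and decoder start token below are assumptions rather than values from the card:

```python
# Minimal sketch, assuming the checkpoint loads as an EncoderDecoderModel with a
# BERT-style tokenizer; the abstract is invented and the generation settings are
# arbitrary choices, not taken from the card.
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("Callidior/bert2bert-base-arxiv-titlegen")
model = EncoderDecoderModel.from_pretrained("Callidior/bert2bert-base-arxiv-titlegen")

abstract = (
    "We study transfer learning for text classification and show that a simple "
    "fine-tuning recipe matches task-specific architectures on several benchmarks."
)
inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=512)
title_ids = model.generate(
    inputs.input_ids,
    decoder_start_token_id=tokenizer.cls_token_id,  # assumption: [CLS] as decoder start
    max_length=32,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```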
CallumRai/HansardGPT2 | https://huggingface.co/CallumRai/HansardGPT2 | A PyTorch GPT-2 model trained on Hansard from 2019-01-01 to 2020-06-01. For more information see: https://github.com/CallumRai/Hansard/ | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CallumRai/HansardGPT2
### Model URL : https://huggingface.co/CallumRai/HansardGPT2
### Model Description : A PyTorch GPT-2 model trained on Hansard from 2019-01-01 to 2020-06-01. For more information see: https://github.com/CallumRai/Hansard/
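As a GPT-2 causal language model, the checkpoint can presumably be queried through the standard text-generation pipeline; a minimal sketch with an invented prompt:

```python
# Minimal sketch, assuming a standard GPT-2 causal-LM head; the prompt is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="CallumRai/HansardGPT2")
print(generator("The honourable member for", max_new_tokens=30, num_return_sequences=1))
```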
CalvinHuang/mt5-small-finetuned-amazon-en-es | https://huggingface.co/CalvinHuang/mt5-small-finetuned-amazon-en-es | This model is a fine-tuned version of google/mt5-small on the None dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CalvinHuang/mt5-small-finetuned-amazon-en-es
### Model URL : https://huggingface.co/CalvinHuang/mt5-small-finetuned-amazon-en-es
### Model Description : This model is a fine-tuned version of google/mt5-small on the None dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Cameron/BERT-Jigsaw | https://huggingface.co/Cameron/BERT-Jigsaw | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-Jigsaw
### Model URL : https://huggingface.co/Cameron/BERT-Jigsaw
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-SBIC-offensive | https://huggingface.co/Cameron/BERT-SBIC-offensive | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-SBIC-offensive
### Model URL : https://huggingface.co/Cameron/BERT-SBIC-offensive
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-SBIC-targetcategory | https://huggingface.co/Cameron/BERT-SBIC-targetcategory | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-SBIC-targetcategory
### Model URL : https://huggingface.co/Cameron/BERT-SBIC-targetcategory
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-eec-emotion | https://huggingface.co/Cameron/BERT-eec-emotion | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-eec-emotion
### Model URL : https://huggingface.co/Cameron/BERT-eec-emotion
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-jigsaw-identityhate | https://huggingface.co/Cameron/BERT-jigsaw-identityhate | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-jigsaw-identityhate
### Model URL : https://huggingface.co/Cameron/BERT-jigsaw-identityhate
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-jigsaw-severetoxic | https://huggingface.co/Cameron/BERT-jigsaw-severetoxic | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-jigsaw-severetoxic
### Model URL : https://huggingface.co/Cameron/BERT-jigsaw-severetoxic
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-mdgender-convai-binary | https://huggingface.co/Cameron/BERT-mdgender-convai-binary | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-mdgender-convai-binary
### Model URL : https://huggingface.co/Cameron/BERT-mdgender-convai-binary
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-mdgender-convai-ternary | https://huggingface.co/Cameron/BERT-mdgender-convai-ternary | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-mdgender-convai-ternary
### Model URL : https://huggingface.co/Cameron/BERT-mdgender-convai-ternary
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-mdgender-wizard | https://huggingface.co/Cameron/BERT-mdgender-wizard | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-mdgender-wizard
### Model URL : https://huggingface.co/Cameron/BERT-mdgender-wizard
### Model Description : No model card New: Create and edit this model card directly on the website! |
Cameron/BERT-rtgender-opgender-annotations | https://huggingface.co/Cameron/BERT-rtgender-opgender-annotations | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Cameron/BERT-rtgender-opgender-annotations
### Model URL : https://huggingface.co/Cameron/BERT-rtgender-opgender-annotations
### Model Description : No model card New: Create and edit this model card directly on the website! |
Camzure/MaamiBot-test | https://huggingface.co/Camzure/MaamiBot-test | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Camzure/MaamiBot-test
### Model URL : https://huggingface.co/Camzure/MaamiBot-test
### Model Description : |
Camzure/MaamiBot | https://huggingface.co/Camzure/MaamiBot | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Camzure/MaamiBot
### Model URL : https://huggingface.co/Camzure/MaamiBot
### Model Description : No model card New: Create and edit this model card directly on the website! |
Canadiancaleb/DialoGPT-small-jesse | https://huggingface.co/Canadiancaleb/DialoGPT-small-jesse | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Canadiancaleb/DialoGPT-small-jesse
### Model URL : https://huggingface.co/Canadiancaleb/DialoGPT-small-jesse
### Model Description : |
Canadiancaleb/DialoGPT-small-walter | https://huggingface.co/Canadiancaleb/DialoGPT-small-walter | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Canadiancaleb/DialoGPT-small-walter
### Model URL : https://huggingface.co/Canadiancaleb/DialoGPT-small-walter
### Model Description : |
Canadiancaleb/jessebot | https://huggingface.co/Canadiancaleb/jessebot | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Canadiancaleb/jessebot
### Model URL : https://huggingface.co/Canadiancaleb/jessebot
### Model Description : No model card New: Create and edit this model card directly on the website! |
Canyonevo/DialoGPT-medium-KingHenry | https://huggingface.co/Canyonevo/DialoGPT-medium-KingHenry | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Canyonevo/DialoGPT-medium-KingHenry
### Model URL : https://huggingface.co/Canyonevo/DialoGPT-medium-KingHenry
### Model Description : No model card New: Create and edit this model card directly on the website! |
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab | https://huggingface.co/CapitainData/wav2vec2-large-xlsr-turkish-demo-colab | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
### Model URL : https://huggingface.co/CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
### Model Description : No model card New: Create and edit this model card directly on the website! |
Capreolus/bert-base-msmarco | https://huggingface.co/Capreolus/bert-base-msmarco | BERT-Base model (google/bert_uncased_L-12_H-768_A-12) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a ForSequenceClassification model; see the Capreolus BERT-MaxP implementation for a usage example. This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Capreolus/bert-base-msmarco
### Model URL : https://huggingface.co/Capreolus/bert-base-msmarco
### Model Description : BERT-Base model (google/bert_uncased_L-12_H-768_A-12) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a ForSequenceClassification model; see the Capreolus BERT-MaxP implementation for a usage example. This corresponds to the BERT-Base model used to initialize BERT-MaxP and PARADE variants in PARADE: Passage Representation Aggregation for Document Reranking by Li et al. It was converted from the released TFv1 checkpoint. Please cite the PARADE paper if you use these weights. |
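Given that the card states the checkpoint is meant to be used as a ForSequenceClassification model for MS MARCO passage ranking, a minimal scoring sketch follows; the binary relevance head, query, and passage are assumptions for illustration (the Capreolus BERT-MaxP implementation referenced in the card remains the authoritative usage example):

```python
# Minimal sketch: score one (query, passage) pair, assuming a binary relevance head.
# The query and passage strings are invented for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Capreolus/bert-base-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("Capreolus/bert-base-msmarco")

query = "what causes seasons on earth"
passage = (
    "Seasons are caused by the tilt of the Earth's rotational axis "
    "relative to the plane of its orbit around the Sun."
)
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probability distribution over (non-relevant, relevant)
```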