Model Name | URL | Crawled Text | text
---|---|---|---|
albert/albert-base-v1 | https://huggingface.co/albert/albert-base-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the base model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-base-v1
### Model URL : https://huggingface.co/albert/albert-base-v1
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the base model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
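The usage snippets this card refers to were not captured by the crawl. A minimal sketch of the fill-mask pipeline usage it describes, assuming the standard Hugging Face transformers API and the model ID from the URL above (the example sentence is illustrative, not from the original card):

```python
from transformers import pipeline

# Fill-mask pipeline; ALBERT uses [MASK] as its mask token.
unmasker = pipeline("fill-mask", model="albert/albert-base-v1")
print(unmasker("Hello I'm a [MASK] model."))
```

The pipeline returns the top candidate tokens for the masked position together with their scores.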
albert/albert-base-v2 | https://huggingface.co/albert/albert-base-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the base model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-base-v2
### Model URL : https://huggingface.co/albert/albert-base-v2
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the base model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
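The PyTorch feature-extraction snippet mentioned in this card was lost in crawling. A hedged sketch, assuming the standard transformers classes for ALBERT and the model ID from the URL above; the input text is a placeholder:

```python
import torch
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-base-v2")
model = AlbertModel.from_pretrained("albert/albert-base-v2")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# last_hidden_state holds one feature vector per input token.
print(output.last_hidden_state.shape)
```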
albert/albert-large-v1 | https://huggingface.co/albert/albert-large-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the large model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-large-v1
### Model URL : https://huggingface.co/albert/albert-large-v1
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the large model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
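The card also points to a TensorFlow variant of the feature-extraction snippet. A minimal sketch, assuming the TFAlbertModel class from transformers and the model ID from the URL above; the input text is a placeholder:

```python
from transformers import AlbertTokenizer, TFAlbertModel

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-large-v1")
model = TFAlbertModel.from_pretrained("albert/albert-large-v1")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="tf")
output = model(encoded_input)

# One hidden-state vector per token, as in the PyTorch version.
print(output.last_hidden_state.shape)
```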
albert/albert-large-v2 | https://huggingface.co/albert/albert-large-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the large model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-large-v2
### Model URL : https://huggingface.co/albert/albert-large-v2
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the large model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
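As a sketch of the masked-language-modeling usage described above, assuming the standard fill-mask pipeline and the model ID from the URL (the prompt is an illustrative example, not taken from the original card):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert/albert-large-v2")

# top_k controls how many candidate completions are returned.
for prediction in unmasker("The capital of France is [MASK].", top_k=3):
    print(prediction["token_str"], prediction["score"])
```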
albert/albert-xlarge-v1 | https://huggingface.co/albert/albert-xlarge-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the xlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-xlarge-v1
### Model URL : https://huggingface.co/albert/albert-xlarge-v1
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the xlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
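The feature-extraction usage referenced above can also be written with the Auto* classes, which resolve to the ALBERT implementations for this checkpoint. A hedged sketch, assuming the model ID from the URL; the input string is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("albert/albert-xlarge-v1")
model = AutoModel.from_pretrained("albert/albert-xlarge-v1")

encoded = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    features = model(**encoded).last_hidden_state  # (batch, tokens, hidden size)
print(features.shape)
```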
albert/albert-xlarge-v2 | https://huggingface.co/albert/albert-xlarge-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the xlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-xlarge-v2
### Model URL : https://huggingface.co/albert/albert-xlarge-v2
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the xlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
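For the masked-language-modeling use described above, the prediction can also be read directly from the MLM head rather than the pipeline. A sketch under the assumption that the standard AlbertForMaskedLM class is used with the model ID from the URL; the sentence is illustrative:

```python
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-xlarge-v2")
model = AlbertForMaskedLM.from_pretrained("albert/albert-xlarge-v2")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position, then take its most likely replacement token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```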
albert/albert-xxlarge-v1 | https://huggingface.co/albert/albert-xxlarge-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the xxlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-xxlarge-v1
### Model URL : https://huggingface.co/albert/albert-xxlarge-v1
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the first version of the xxlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
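The bias caveat above ("this model can have biased predictions") can be probed with the same fill-mask pipeline. A sketch assuming the model ID from the URL; the paired prompts are illustrative, not the exact ones from the original card:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert/albert-xxlarge-v1")

# Compare completions for prompts that differ only in a single attribute.
for prompt in ("The man worked as a [MASK].", "The woman worked as a [MASK]."):
    top = unmasker(prompt, top_k=5)
    print(prompt, [p["token_str"] for p in top])
```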
albert/albert-xxlarge-v2 | https://huggingface.co/albert/albert-xxlarge-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the xxlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : albert/albert-xxlarge-v2
### Model URL : https://huggingface.co/albert/albert-xxlarge-v2
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model, like all ALBERT models, is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team. ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs. ALBERT is distinctive in that it shares its layers across its Transformer, so all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, since it has to iterate through the same number of (repeating) layers. This is the second version of the xxlarge model. Version 2 differs from version 1 in its dropout rates, additional training data, and longer training, and it achieves better results on nearly all downstream tasks. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch and in TensorFlow; a usage sketch follows this entry. Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: The ALBERT procedure follows the BERT setup. The details of the masking procedure for each sentence are the following: When fine-tuned on downstream tasks, the ALBERT models achieve the following results: |
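The card states that the inputs are "then of the form:" but the template itself was not captured. For ALBERT, as for BERT, a sentence pair is packed as [CLS] sentence A [SEP] sentence B [SEP]; a sketch that shows this with the tokenizer, assuming the model ID from the URL and placeholder sentences:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert/albert-xxlarge-v2")

# Two "sentences" packed into one input: [CLS] sentence A [SEP] sentence B [SEP]
encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded["token_type_ids"])  # 0 for sentence A tokens, 1 for sentence B tokens
```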
bert-base-cased-finetuned-mrpc | https://huggingface.co/bert-base-cased-finetuned-mrpc | No model card. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-cased-finetuned-mrpc
### Model URL : https://huggingface.co/bert-base-cased-finetuned-mrpc
### Model Description : No model card. |
bert-base-cased | https://huggingface.co/bert-base-cased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it makes a difference between
english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, $\beta_1 = 0.9$ and $\beta_2 = 0.999$, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-cased
### Model URL : https://huggingface.co/bert-base-cased
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it makes a difference between
english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: |
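A minimal sketch of the fill-mask pipeline and PyTorch feature-extraction usage referenced in this entry, assuming the Hugging Face transformers library with PyTorch is installed; the example sentences are illustrative only.

```python
from transformers import pipeline, AutoTokenizer, AutoModel

# Masked language modeling with the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-cased")
print(unmasker("Hello, I'm a [MASK] model."))

# Extracting the hidden-state features of a given text in PyTorch.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```

The TensorFlow variant is analogous, using TFAutoModel and return_tensors="tf".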
bert-base-chinese | https://huggingface.co/bert-base-chinese | This model has been pre-trained for Chinese; training and random input masking were applied independently to word pieces (as in the original BERT paper). This model can be used for masked language modeling (see the usage sketch after this entry). CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). [More Information Needed] [More Information Needed] | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-chinese
### Model URL : https://huggingface.co/bert-base-chinese
### Model Description : This model has been pre-trained for Chinese; training and random input masking were applied independently to word pieces (as in the original BERT paper). This model can be used for masked language modeling (see the usage sketch after this entry). CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). [More Information Needed] [More Information Needed] |
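A minimal fill-mask sketch for the masked-language-modeling usage referenced in this entry, assuming the transformers library is installed; the Chinese example sentence is illustrative only.

```python
from transformers import pipeline

# Predict the masked token in a Chinese sentence with bert-base-chinese.
unmasker = pipeline("fill-mask", model="bert-base-chinese")
for prediction in unmasker("巴黎是[MASK]国的首都。"):
    print(prediction["token_str"], prediction["score"])
```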
bert-base-german-cased | https://huggingface.co/bert-base-german-cased | Language model: bert-base-cased. Language: German. Training data: Wiki, OpenLegalData, News (~ 12GB). Eval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification). Infrastructure: 1x TPU v2. Published: Jun 14th, 2019. Update April 3rd, 2020: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens.
For details see the related FARM issue. If you want to use the old vocab we have also uploaded a "deepset/bert-base-german-cased-oldvocab" model. See https://deepset.ai/german-bert for more details. During training we monitored the loss and evaluated different model checkpoints on the following German datasets: Even without thorough hyperparameter tuning, we observed quite stable learning, especially for our German model. Multiple restarts with different seeds produced quite similar results. We further evaluated different points during the 9 days of pre-training and were astonished at how fast the model converges to the maximally reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints, taken at 0 up to 840k training steps (x-axis in the figure below). Most checkpoints are taken from early training, where we expected most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size). We bring NLP to the industry via open source! Our focus: industry-specific language models & large-scale QA systems. Some of our work: Get in touch:
Twitter | LinkedIn | Website | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-german-cased
### Model URL : https://huggingface.co/bert-base-german-cased
### Model Description : Language model: bert-base-cased. Language: German. Training data: Wiki, OpenLegalData, News (~ 12GB). Eval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification). Infrastructure: 1x TPU v2. Published: Jun 14th, 2019. Update April 3rd, 2020: we updated the vocabulary file on deepset's s3 to conform with the default tokenization of punctuation tokens.
For details see the related FARM issue. If you want to use the old vocab we have also uploaded a "deepset/bert-base-german-cased-oldvocab" model. See https://deepset.ai/german-bert for more details. During training we monitored the loss and evaluated different model checkpoints on the following German datasets: Even without thorough hyperparameter tuning, we observed quite stable learning, especially for our German model. Multiple restarts with different seeds produced quite similar results. We further evaluated different points during the 9 days of pre-training and were astonished at how fast the model converges to the maximally reachable performance. We ran all 5 downstream tasks on 7 different model checkpoints, taken at 0 up to 840k training steps (x-axis in the figure below). Most checkpoints are taken from early training, where we expected most performance changes. Surprisingly, even a randomly initialized BERT can be trained only on labeled downstream datasets and reach good performance (blue line, GermEval 2018 Coarse task, 795 kB trainset size). We bring NLP to the industry via open source! Our focus: industry-specific language models & large-scale QA systems. Some of our work: Get in touch:
Twitter | LinkedIn | Website |
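A minimal sketch of loading this checkpoint for feature extraction with the transformers library, assuming PyTorch is installed; the German example sentence is illustrative, and the commented alternative points to the old-vocabulary checkpoint mentioned in the entry.

```python
from transformers import AutoTokenizer, AutoModel

# Cased German BERT released by deepset; swap in
# "deepset/bert-base-german-cased-oldvocab" if the old vocabulary is needed.
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModel.from_pretrained("bert-base-german-cased")

inputs = tokenizer("Der Hauptbahnhof liegt mitten in der Stadt.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```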
bert-base-german-dbmdz-cased | https://huggingface.co/bert-base-german-dbmdz-cased | This model is the same as dbmdz/bert-base-german-cased. See the dbmdz/bert-base-german-cased model card for details on the model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-german-dbmdz-cased
### Model URL : https://huggingface.co/bert-base-german-dbmdz-cased
### Model Description : This model is the same as dbmdz/bert-base-german-cased. See the dbmdz/bert-base-german-cased model card for details on the model. |
bert-base-german-dbmdz-uncased | https://huggingface.co/bert-base-german-dbmdz-uncased | This model is the same as dbmdz/bert-base-german-uncased. See the dbmdz/bert-base-german-uncased model card for details on the model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-german-dbmdz-uncased
### Model URL : https://huggingface.co/bert-base-german-dbmdz-uncased
### Model Description : This model is the same as dbmdz/bert-base-german-uncased. See the dbmdz/bert-base-german-uncased model card for details on the model. |
bert-base-multilingual-cased | https://huggingface.co/bert-base-multilingual-cased | Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is case sensitive: it makes a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
here. The texts are tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-multilingual-cased
### Model URL : https://huggingface.co/bert-base-multilingual-cased
### Model Description : Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is case sensitive: it makes a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
here. The texts are tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: |
bert-base-multilingual-uncased | https://huggingface.co/bert-base-multilingual-uncased | Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list
here. The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-multilingual-uncased
### Model URL : https://huggingface.co/bert-base-multilingual-uncased
### Model Description : Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on the 102 languages with the largest Wikipedias. You can find the complete list
here. The texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. The inputs of the model are then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: |
bert-base-uncased | https://huggingface.co/bert-base-uncased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after. Modified preprocessing with whole word masking replaced subpiece masking in a subsequent work, with the release of two models. 24 other smaller models were released afterward. The detailed release history can be found on the google-research/bert readme on GitHub. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-base-uncased
### Model URL : https://huggingface.co/bert-base-uncased
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after. Modified preprocessing with whole word masking replaced subpiece masking in a subsequent work, with the release of two models. 24 other smaller models were released afterward. The detailed release history can be found on the google-research/bert readme on GitHub. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: |
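A minimal sketch of the fill-mask pipeline and PyTorch feature-extraction usage referenced in this entry, assuming transformers with PyTorch is installed; the example sentences are illustrative only.

```python
from transformers import pipeline, BertTokenizer, BertModel

# Top predictions for a masked token.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for prediction in unmasker("The capital of France is [MASK].", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))

# Hidden-state features of a given text.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)
```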
bert-large-cased-whole-word-masking-finetuned-squad | https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: This model should be used as a question-answering model. You may use it in a question answering pipeline (see the usage sketch after this entry), or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-large-cased-whole-word-masking-finetuned-squad
### Model URL : https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: This model should be used as a question-answering model. You may use it in a question answering pipeline (see the usage sketch after this entry), or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: |
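A minimal sketch of the question-answering pipeline usage referenced in this entry, assuming transformers with PyTorch is installed; the question and context are illustrative only.

```python
from transformers import pipeline

# Extractive question answering with the SQuAD-fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="bert-large-cased-whole-word-masking-finetuned-squad",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="After pre-training with Whole Word Masking, the model was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```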
bert-large-cased-whole-word-masking | https://huggingface.co/bert-large-cased-whole-word-masking | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-large-cased-whole-word-masking
### Model URL : https://huggingface.co/bert-large-cased-whole-word-masking
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: |
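A minimal fill-mask sketch for this whole-word-masking checkpoint, assuming transformers is installed; the example sentence is illustrative only.

```python
from transformers import pipeline

# Masked language modeling with the whole-word-masking large cased checkpoint.
unmasker = pipeline("fill-mask", model="bert-large-cased-whole-word-masking")
for prediction in unmasker("London is the [MASK] of the United Kingdom.", top_k=3):
    print(prediction["sequence"], prediction["score"])
```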
bert-large-cased | https://huggingface.co/bert-large-cased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-large-cased
### Model URL : https://huggingface.co/bert-large-cased
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is cased: it makes a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling, or load it in PyTorch or TensorFlow to get the features of a given text (see the usage sketch after this entry). Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: |
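A minimal PyTorch feature-extraction sketch for this checkpoint, assuming transformers with PyTorch is installed; the example text is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
model = AutoModel.from_pretrained("bert-large-cased")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# BERT-large uses 24 layers and a hidden size of 1,024.
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024)
```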
bert-large-uncased-whole-word-masking-finetuned-squad | https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: This model should be used as a question-answering model. You may use it in a question answering pipeline (see the usage sketch after this entry), or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: The results obtained are the following: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-large-uncased-whole-word-masking-finetuned-squad
### Model URL : https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: This model should be used as a question-answering model. You may use it in a question answering pipeline (see the usage sketch after this entry), or use it to output raw results given a query and a context. You may see other use cases in the task summary of the transformers documentation. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: The results obtained are the following: |
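A minimal sketch of using this checkpoint to output raw answer spans given a query and a context, as referenced in this entry, assuming transformers with PyTorch is installed; the question and context are illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "How many steps was the model pre-trained for?"
context = "The model was trained for one million steps with a batch size of 256."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Select the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```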
bert-large-uncased-whole-word-masking | https://huggingface.co/bert-large-uncased-whole-word-masking | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-large-uncased-whole-word-masking
### Model URL : https://huggingface.co/bert-large-uncased-whole-word-masking
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: |
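The bert-large-uncased-whole-word-masking card above refers to a fill-mask pipeline example and to PyTorch/TensorFlow feature-extraction snippets, but the code itself was lost in the crawl. A minimal sketch using the transformers library, with an illustrative example sentence, might look like this:

```python
from transformers import pipeline, BertTokenizer, BertModel

# Masked language modeling with the pipeline API (example sentence is illustrative).
unmasker = pipeline("fill-mask", model="bert-large-uncased-whole-word-masking")
print(unmasker("Hello I'm a [MASK] model."))

# Extracting hidden-state features for a given text in PyTorch.
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
model = BertModel.from_pretrained("bert-large-uncased-whole-word-masking")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```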
bert-large-uncased | https://huggingface.co/bert-large-uncased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : bert-large-uncased
### Model URL : https://huggingface.co/bert-large-uncased
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team. BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives: This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs. This model has the following configuration: You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions: This bias will also affect all fine-tuned versions of this model. The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, β₁ = 0.9 and β₂ = 0.999, a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after. When fine-tuned on downstream tasks, this model achieves the following results: |
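The bert-large-uncased card above likewise mentions a fill-mask pipeline and PyTorch/TensorFlow feature-extraction snippets that did not survive the crawl. A minimal sketch, here showing the TensorFlow variant for feature extraction (the example sentences are illustrative):

```python
from transformers import pipeline, BertTokenizer, TFBertModel

# Masked language modeling with the pipeline API.
unmasker = pipeline("fill-mask", model="bert-large-uncased")
print(unmasker("The man worked as a [MASK]."))

# Feature extraction in TensorFlow; the card also describes an equivalent PyTorch variant.
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = TFBertModel.from_pretrained("bert-large-uncased")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 1024) for BERT-large
```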
almanach/camembert-base | https://huggingface.co/almanach/camembert-base | CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to Camembert Website CamemBERT was trained and evaluated by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. If you use our work, please cite: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : almanach/camembert-base
### Model URL : https://huggingface.co/almanach/camembert-base
### Model Description : CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to Camembert Website CamemBERT was trained and evaluated by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. If you use our work, please cite: |
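The CamemBERT card above points to the project website and citation but no usage snippet was captured in the crawl. As a minimal sketch only, masked-word prediction with the base checkpoint could use the fill-mask pipeline; note that CamemBERT follows the RoBERTa convention, so the mask token is "<mask>", and the French example sentence below is illustrative:

```python
from transformers import pipeline

# CamemBERT uses the RoBERTa-style "<mask>" token rather than BERT's "[MASK]".
fill_mask = pipeline("fill-mask", model="almanach/camembert-base")
print(fill_mask("Le camembert est <mask> !"))
```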
Salesforce/ctrl | https://huggingface.co/Salesforce/ctrl | The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available here. In their model card, the developers write: The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior. The model is a language model. The model can be used for text generation. In their model card, the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are: In their model card, the developers write: Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. In their model card, the developers write: We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release. To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse. In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior. See the associated paper for further discussions about the ethics of LLMs. In their model card, the developers write: See the CTRL-detector GitHub repo for more on the detector model. 
In their model card, the developers write: This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data. In the associated paper the developers write: We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected. See the paper for links, references, and further details. In the associated paper the developers write: CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016). See the paper for links, references, and further details. In their model card, the developers write that model performance measures are: Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are pulled from the associated paper. In the associated paper the developers write: CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results. See the paper for links, references, and further details. BibTeX: APA: This model card was written by the team at Hugging Face, referencing the model card released by the developers. Use the code below to get started with the model. See the Hugging Face ctrl docs for more information. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Salesforce/ctrl
### Model URL : https://huggingface.co/Salesforce/ctrl
### Model Description : The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The model developers released a model card for CTRL, available here. In their model card, the developers write: The CTRL Language Model analyzed in this card generates text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior. The model is a language model. The model can be used for text generation. In their model card, the developers write that the primary intended users are general audiences and NLP Researchers, and that the primary intended uses are: In their model card, the developers write: Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. In their model card, the developers write: We recognize the potential for misuse or abuse, including use by bad actors who could manipulate the system to act maliciously and generate text to influence decision-making in political, economic, and social settings. False attribution could also harm individuals, organizations, or other entities. To address these concerns, the model was evaluated internally as well as externally by third parties, including the Partnership on AI, prior to release. To mitigate potential misuse to the extent possible, we stripped out all detectable training data from undesirable sources. We then redteamed the model and found that negative utterances were often placed in contexts that made them identifiable as such. For example, when using the ‘News’ control code, hate speech could be embedded as part of an apology (e.g. “the politician apologized for saying [insert hateful statement]”), implying that this type of speech was negative. By pre-selecting the available control codes (omitting, for example, Instagram and Twitter from the available domains), we are able to limit the potential for misuse. In releasing our model, we hope to put it into the hands of researchers and prosocial actors so that they can work to control, understand, and potentially combat the negative consequences of such models. We hope that research into detecting fake news and model-generated content of all kinds will be pushed forward by CTRL. It is our belief that these models should become a common tool so researchers can design methods to guard against malicious use and so the public becomes familiar with their existence and patterns of behavior. See the associated paper for further discussions about the ethics of LLMs. In their model card, the developers write: See the CTRL-detector GitHub repo for more on the detector model. 
In their model card, the developers write: This model is trained on 140 GB of text drawn from a variety of domains: Wikipedia (English, German, Spanish, and French), Project Gutenberg, submissions from 45 subreddits, OpenWebText, a large collection of news data, Amazon Reviews, Europarl and UN data from WMT (En-De, En-Es, En-Fr), question-answer pairs (no context documents) from ELI5, and the MRQA shared task, which includes Stanford Question Answering Dataset, NewsQA, TriviaQA, SearchQA, HotpotQA, and Natural Questions. See the paper for the full list of training data. In the associated paper the developers write: We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE4, but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with the compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the total 180 GB collected. See the paper for links, references, and further details. In the associated paper the developers write: CTRL has model dimension d = 1280, inner dimension f = 8192, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press & Wolf, 2016). See the paper for links, references, and further details. In their model card, the developers write that model performance measures are: Performance evaluated on qualitative judgments by humans as to whether the control codes lead to text generated in the desired domain Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are pulled from the associated paper. In the associated paper the developers write: CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of gradients were clipped to 0.25 as in (Merity et al., 2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma & Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer & Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results. See the paper for links, references, and further details. BibTeX: APA: This model card was written by the team at Hugging Face, referencing the model card released by the developers. Use the code below to get started with the model. See the Hugging Face ctrl docs for more information. |
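The CTRL card above says "Use the code below to get started with the model", but the snippet was not captured in the crawl. A minimal generation sketch, assuming the dedicated CTRL classes in transformers and using "Wikipedia" as one of the documented control codes; the prompt and decoding settings are illustrative, not the developers' reference settings:

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")  # large checkpoint, several GB

# The first token acts as a control code; "Wikipedia" steers generation toward that domain.
prompt = "Wikipedia The history of the transformer architecture"
inputs = tokenizer(prompt, return_tensors="pt")

# A repetition penalty is commonly applied with CTRL to avoid degenerate loops (assumed value).
outputs = model.generate(**inputs, max_new_tokens=40, repetition_penalty=1.2, do_sample=False)
print(tokenizer.decode(outputs[0]))
```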
distilbert/distilbert-base-cased-distilled-squad | https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad | Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than bert-base-uncased, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark. This model is a fine-tune checkpoint of DistilBERT-base-cased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. Use the code below to get started with the model. Here is how to use this model in PyTorch: And in TensorFlow: This model can be used for question answering. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The distilbert-base-cased model was trained using the same data as the distilbert-base-uncased model. The distilbert-base-uncased model model describes it's training data as: DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). To learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card. See the distilbert-base-cased model card for further details. See the distilbert-base-cased model card for further details. As discussed in the model repository This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7). Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD. See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. APA: This model card was written by the Hugging Face team. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-cased-distilled-squad
### Model URL : https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad
### Model Description : Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than bert-base-uncased, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark. This model is a fine-tune checkpoint of DistilBERT-base-cased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. Use the code below to get started with the model. Here is how to use this model in PyTorch: And in TensorFlow: This model can be used for question answering. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The distilbert-base-cased model was trained using the same data as the distilbert-base-uncased model. The distilbert-base-uncased model model describes it's training data as: DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). To learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card. See the distilbert-base-cased model card for further details. See the distilbert-base-cased model card for further details. As discussed in the model repository This model reaches a F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, BERT bert-base-cased version reaches a F1 score of 88.7). Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD. See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. APA: This model card was written by the Hugging Face team. |
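The distilbert-base-cased-distilled-squad card above refers to PyTorch and TensorFlow snippets that were lost in the crawl. A minimal sketch using the question-answering pipeline, with an illustrative question and context:

```python
from transformers import pipeline

question_answerer = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

context = (
    "DistilBERT is a small, fast, cheap and light Transformer model "
    "trained by distilling BERT base."
)
result = question_answerer(question="What is DistilBERT distilled from?", context=context)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```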
distilbert/distilbert-base-cased | https://huggingface.co/distilbert/distilbert-base-cased | This model is a distilled version of the BERT base model.
It was introduced in this paper.
The code for the distillation process can be found
here.
This model is cased: it does make a difference between english and English. All the training details on the pre-training, the uses, limitations and potential biases (included below) are the same as for DistilBERT-base-uncased.
We highly encourage you to check it out if you want to know more. DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives: This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
the bias of its teacher model. This bias will also affect all fine-tuned versions of this model. DistilBERT was pretrained on the same data as BERT, which is BookCorpus, a dataset
consisting of 11,038 unpublished books and English Wikipedia
(excluding lists, tables and headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on eight 16 GB V100 GPUs for 90 hours. See the
training code for all hyperparameters
details. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-cased
### Model URL : https://huggingface.co/distilbert/distilbert-base-cased
### Model Description : This model is a distilled version of the BERT base model.
It was introduced in this paper.
The code for the distillation process can be found
here.
This model is cased: it does make a difference between english and English. All the training details on the pre-training, the uses, limitations and potential biases (included below) are the same as for DistilBERT-base-uncased.
We highly encourage you to check it out if you want to know more. DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives: This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
the bias of its teacher model. This bias will also affect all fine-tuned versions of this model. DistilBERT was pretrained on the same data as BERT, which is BookCorpus, a dataset
consisting of 11,038 unpublished books and English Wikipedia
(excluding lists, tables and headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on eight 16 GB V100 GPUs for 90 hours. See the
training code for all hyperparameters
details. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: |
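The distilbert-base-cased card above mentions a fill-mask pipeline and PyTorch/TensorFlow feature-extraction snippets that were not captured in the crawl. A minimal sketch, with an illustrative example sentence:

```python
from transformers import pipeline, DistilBertTokenizer, DistilBertModel

# Masked language modeling with the pipeline API.
unmasker = pipeline("fill-mask", model="distilbert-base-cased")
print(unmasker("Hello I'm a [MASK] model."))

# Feature extraction in PyTorch; the card also refers to a TensorFlow variant (TFDistilBertModel).
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased")
model = DistilBertModel.from_pretrained("distilbert-base-cased")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 768)
```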
distilbert/distilbert-base-german-cased | https://huggingface.co/distilbert/distilbert-base-german-cased | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-german-cased
### Model URL : https://huggingface.co/distilbert/distilbert-base-german-cased
### Model Description : |
distilbert/distilbert-base-multilingual-cased | https://huggingface.co/distilbert/distilbert-base-multilingual-cased | This model is a distilled version of the BERT base multilingual model. The code for the distillation process can be found here. This model is cased: it does make a difference between english and English. The model is trained on the concatenation of Wikipedia in 104 different languages listed here.
The model has 6 layers, a hidden dimension of 768 and 12 heads, totaling 134M parameters (compared to 177M parameters for mBERT-base).
On average, this model, referred to as DistilmBERT, is twice as fast as mBERT-base. We encourage potential users of this model to check out the BERT base multilingual model card to learn more about usage, limitations and potential biases. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers report the following accuracy results for DistilmBERT (see GitHub Repo): Here are the results on the test sets for 6 of the languages available in XNLI. The results are computed in the zero shot setting (trained on the English portion and evaluated on the target language portion): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). APA You can use the model directly with a pipeline for masked language modeling: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-multilingual-cased
### Model URL : https://huggingface.co/distilbert/distilbert-base-multilingual-cased
### Model Description : This model is a distilled version of the BERT base multilingual model. The code for the distillation process can be found here. This model is cased: it does make a difference between english and English. The model is trained on the concatenation of Wikipedia in 104 different languages listed here.
The model has 6 layers, a hidden dimension of 768 and 12 heads, totaling 134M parameters (compared to 177M parameters for mBERT-base).
On average, this model, referred to as DistilmBERT, is twice as fast as mBERT-base. We encourage potential users of this model to check out the BERT base multilingual model card to learn more about usage, limitations and potential biases. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers report the following accuracy results for DistilmBERT (see GitHub Repo): Here are the results on the test sets for 6 of the languages available in XNLI. The results are computed in the zero shot setting (trained on the English portion and evaluated on the target language portion): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). APA You can use the model directly with a pipeline for masked language modeling: |
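The distilbert-base-multilingual-cased card above ends by pointing to a fill-mask pipeline example that was not captured in the crawl. A minimal sketch, with an illustrative English sentence (DistilmBERT keeps BERT's "[MASK]" token):

```python
from transformers import pipeline

# Masked language modeling with the multilingual DistilBERT checkpoint.
unmasker = pipeline("fill-mask", model="distilbert-base-multilingual-cased")
print(unmasker("Hello I'm a [MASK] model."))
```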
distilbert/distilbert-base-uncased-distilled-squad | https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad | Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than bert-base-uncased, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark. This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. Use the code below to get started with the model. Here is how to use this model in PyTorch: And in TensorFlow: This model can be used for question answering. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The distilbert-base-uncased model model describes it's training data as: DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). To learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card. See the distilbert-base-uncased model card for further details. See the distilbert-base-uncased model card for further details. As discussed in the model repository This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5). Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD. See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. APA: This model card was written by the Hugging Face team. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-uncased-distilled-squad
### Model URL : https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad
### Model Description : Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, adistilled version of BERT, and the paper DistilBERT, adistilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than bert-base-uncased, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language understanding benchmark. This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1. Use the code below to get started with the model. Here is how to use this model in PyTorch: And in TensorFlow: This model can be used for question answering. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The distilbert-base-uncased model model describes it's training data as: DistilBERT pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers). To learn more about the SQuAD v1.1 dataset, see the SQuAD v1.1 data card. See the distilbert-base-uncased model card for further details. See the distilbert-base-uncased model card for further details. As discussed in the model repository This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5). Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD. See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. APA: This model card was written by the Hugging Face team. |
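The distilbert-base-uncased-distilled-squad card above also refers to PyTorch and TensorFlow snippets that did not survive the crawl. A minimal PyTorch sketch using the explicit model classes rather than the pipeline; the question and context are illustrative:

```python
import torch
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")

question = "Where do I live?"
context = "My name is Tim and I live in Sweden."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode the span between them.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```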
distilbert/distilbert-base-uncased-finetuned-sst-2-english | https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english | Model Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7). Example of single-label classification:
This model can be used for topic classification. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations. For instance, for sentences like This film was filmed in COUNTRY, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country. We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, Stereoset. The authors use the following Stanford Sentiment Treebank(sst2) corpora for the model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-uncased-finetuned-sst-2-english
### Model URL : https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english
### Model Description : Model Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2.
This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7). Example of single-label classification:
This model can be used for topic classification. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations. For instance, for sentences like This film was filmed in COUNTRY, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country. We strongly advise users to thoroughly probe these aspects on their use-cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, Stereoset. The authors use the following Stanford Sentiment Treebank(sst2) corpora for the model. |
distilbert/distilbert-base-uncased | https://huggingface.co/distilbert/distilbert-base-uncased | This model is a distilled version of the BERT base model. It was
introduced in this paper. The code for the distillation process can be found
here. This model is uncased: it does
not make a difference between english and English. DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives: This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
the bias of its teacher model. This bias will also affect all fine-tuned versions of this model. DistilBERT was pretrained on the same data as BERT, which is BookCorpus, a dataset
consisting of 11,038 unpublished books and English Wikipedia
(excluding lists, tables and headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 8 16 GB V100 for 90 hours. See the
training code for all hyperparameter
details. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilbert-base-uncased
### Model URL : https://huggingface.co/distilbert/distilbert-base-uncased
### Model Description : This model is a distilled version of the BERT base model. It was
introduced in this paper. The code for the distillation process can be found
here. This model is uncased: it does
not make a difference between english and English. DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives: This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks. You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
the bias of its teacher model. This bias will also affect all fine-tuned versions of this model. DistilBERT was pretrained on the same data as BERT, which is BookCorpus, a dataset
consisting of 11,038 unpublished books and English Wikipedia
(excluding lists, tables and headers). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: The model was trained on 8 16 GB V100 for 90 hours. See the
training code for all hyperparameter
details. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: |
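The DistilBERT card above refers to a fill-mask pipeline example and to PyTorch/TensorFlow feature-extraction snippets that were stripped during crawling. Below is a minimal sketch of the pipeline usage and the PyTorch side, assuming `transformers` and `torch` are installed; the input sentences are placeholders.

```python
# Minimal sketch: masked language modeling and feature extraction with DistilBERT.
from transformers import pipeline, DistilBertTokenizer, DistilBertModel

# Fill-mask pipeline (the [MASK] token is DistilBERT's mask token).
unmasker = pipeline("fill-mask", model="distilbert/distilbert-base-uncased")
print(unmasker("Hello I'm a [MASK] model."))

# Extracting the hidden-state features of a given text in PyTorch.
tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```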
distilbert/distilgpt2 | https://huggingface.co/distilbert/distilgpt2 | DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context. The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser. OpenAI states in the GPT-2 model card: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model. Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: And in TensorFlow: DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText. The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019). 
The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set). Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilgpt2
### Model URL : https://huggingface.co/distilbert/distilgpt2
### Model Description : DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of GPT-2. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. As the developers of GPT-2 (OpenAI) note in their model card, “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context. The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser. OpenAI states in the GPT-2 model card: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model. Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: And in TensorFlow: DistilGPT2 was trained using OpenWebTextCorpus, an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the OpenWebTextCorpus Dataset Card for additional information about OpenWebTextCorpus and Radford et al. (2019) for additional information about WebText. The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in Sanh et al. (2019). 
The creators of DistilGPT2 report that, on the WikiText-103 benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set). Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. |
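The DistilGPT2 card mentions using a text-generation pipeline with a fixed seed for reproducibility; that snippet did not survive crawling. A minimal sketch, assuming `transformers` and `torch` are installed; the prompt and generation parameters are illustrative.

```python
# Minimal sketch: seeded text generation with DistilGPT2.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="distilbert/distilgpt2")
set_seed(42)  # generation is stochastic, so fix a seed for reproducibility
print(generator("Hello, I'm a language model,", max_length=20, num_return_sequences=3))
```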
distilbert/distilroberta-base | https://huggingface.co/distilbert/distilroberta-base | This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT.
The code for the distillation process can be found here.
This model is case-sensitive: it makes a difference between english and English. The model has 6 layers, a hidden dimension of 768 and 12 heads, totaling 82M parameters (compared to 125M parameters for RoBERTa-base).
On average DistilRoBERTa is twice as fast as Roberta-base. We encourage users of this model card to check out the RoBERTa-base model card to learn more about usage, limitations and potential biases. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. DistilRoBERTa was pre-trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset (it is ~4 times less training data than the teacher RoBERTa). See the roberta-base model card for further details on training. When fine-tuned on downstream tasks, this model achieves the following results (see GitHub Repo): Glue test results: Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). APA You can use the model directly with a pipeline for masked language modeling: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : distilbert/distilroberta-base
### Model URL : https://huggingface.co/distilbert/distilroberta-base
### Model Description : This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT.
The code for the distillation process can be found here.
This model is case-sensitive: it makes a difference between english and English. The model has 6 layers, a hidden dimension of 768 and 12 heads, totaling 82M parameters (compared to 125M parameters for RoBERTa-base).
On average DistilRoBERTa is twice as fast as Roberta-base. We encourage users of this model card to check out the RoBERTa-base model card to learn more about usage, limitations and potential biases. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. DistilRoBERTa was pre-trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset (it is ~4 times less training data than the teacher RoBERTa). See the roberta-base model card for further details on training. When fine-tuned on downstream tasks, this model achieves the following results (see GitHub Repo): Glue test results: Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). APA You can use the model directly with a pipeline for masked language modeling: |
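The masked-language-modeling pipeline example referenced at the end of the DistilRoBERTa card was stripped during crawling. A minimal sketch, assuming `transformers` and `torch` are installed; the probe sentence is illustrative.

```python
# Minimal sketch: fill-mask with DistilRoBERTa.
# RoBERTa-style checkpoints use the <mask> token rather than [MASK].
from transformers import pipeline

unmasker = pipeline("fill-mask", model="distilbert/distilroberta-base")
print(unmasker("The capital of France is <mask>."))
```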
openai-community/gpt2-large | https://huggingface.co/openai-community/gpt2-large | Model Description: GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: In their model card about GPT-2, OpenAI wrote: The primary intended users of these models are AI researchers and practitioners. We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. In their model card about GPT-2, OpenAI wrote: Here are some secondary use cases we believe are likely: In their model card about GPT-2, OpenAI wrote: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The following evaluation information is extracted from the associated paper. The model authors write in the associated paper that: Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. The model achieves the following results without any fine-tuning (zero-shot): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. This model card was written by the Hugging Face team. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/gpt2-large
### Model URL : https://huggingface.co/openai-community/gpt2-large
### Model Description : Model Description: GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: In their model card about GPT-2, OpenAI wrote: The primary intended users of these models are AI researchers and practitioners. We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. In their model card about GPT-2, OpenAI wrote: Here are some secondary use cases we believe are likely: In their model card about GPT-2, OpenAI wrote: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The following evaluation information is extracted from the associated paper. The model authors write in the associated paper that: Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. The model achieves the following results without any fine-tuning (zero-shot): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. This model card was written by the Hugging Face team. |
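The GPT-2 Large card above mentions a seeded text-generation pipeline and feature-extraction snippets that were stripped during crawling. A minimal sketch of the pipeline usage, assuming `transformers` and `torch` are installed; the prompt, seed, and generation parameters are illustrative.

```python
# Minimal sketch: seeded text generation with GPT-2 Large.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="openai-community/gpt2-large")
set_seed(42)  # fix a seed because sampling-based generation is stochastic
print(generator("Hello, I'm a language model,", max_length=30, num_return_sequences=3))
```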
openai-community/gpt2-medium | https://huggingface.co/openai-community/gpt2-medium | Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: In their model card about GPT-2, OpenAI wrote: The primary intended users of these models are AI researchers and practitioners. We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. In their model card about GPT-2, OpenAI wrote: Here are some secondary use cases we believe are likely: In their model card about GPT-2, OpenAI wrote: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The following evaluation information is extracted from the associated paper. The model authors write in the associated paper that: Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. The model achieves the following results without any fine-tuning (zero-shot): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. This model card was written by the Hugging Face team. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/gpt2-medium
### Model URL : https://huggingface.co/openai-community/gpt2-medium
### Model Description : Model Description: GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: In their model card about GPT-2, OpenAI wrote: The primary intended users of these models are AI researchers and practitioners. We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. In their model card about GPT-2, OpenAI wrote: Here are some secondary use cases we believe are likely: In their model card about GPT-2, OpenAI wrote: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The following evaluation information is extracted from the associated paper. The model authors write in the associated paper that: Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. The model achieves the following results without any fine-tuning (zero-shot): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. This model card was written by the Hugging Face team. |
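The GPT-2 Medium card refers to a snippet for getting the features of a given text in PyTorch, which was lost during crawling. A minimal sketch, assuming `transformers` and `torch` are installed; the input text is a placeholder.

```python
# Minimal sketch: extracting hidden-state features from GPT-2 Medium in PyTorch.
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2-medium")
model = GPT2Model.from_pretrained("openai-community/gpt2-medium")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```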
openai-community/gpt2-xl | https://huggingface.co/openai-community/gpt2-xl | Model Description: GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: In their model card about GPT-2, OpenAI wrote: The primary intended users of these models are AI researchers and practitioners. We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. In their model card about GPT-2, OpenAI wrote: Here are some secondary use cases we believe are likely: In their model card about GPT-2, OpenAI wrote: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. When they released the 1.5B parameter model, OpenAI wrote in a blog post: GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text. The blog post further discusses the risks, limitations, and biases of the model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The following evaluation information is extracted from the associated paper. The model authors write in the associated paper that: Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. The model achieves the following results without any fine-tuning (zero-shot): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit. See the associated paper for details on the modeling architecture, objective, and training details. This model card was written by the Hugging Face team. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/gpt2-xl
### Model URL : https://huggingface.co/openai-community/gpt2-xl
### Model Description : Model Description: GPT-2 XL is the 1.5B parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: In their model card about GPT-2, OpenAI wrote: The primary intended users of these models are AI researchers and practitioners. We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. In their model card about GPT-2, OpenAI wrote: Here are some secondary use cases we believe are likely: In their model card about GPT-2, OpenAI wrote: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. When they released the 1.5B parameter model, OpenAI wrote in a blog post: GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text. The blog post further discusses the risks, limitations, and biases of the model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The following evaluation information is extracted from the associated paper. The model authors write in the associated paper that: Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. The model achieves the following results without any fine-tuning (zero-shot): Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type and hours used are based on information provided by one of the model authors on Reddit. See the associated paper for details on the modeling architecture, objective, and training details. This model card was written by the Hugging Face team. |
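The GPT-2 XL card likewise refers to a TensorFlow feature-extraction snippet that was stripped during crawling. A minimal sketch, assuming `transformers` and `tensorflow` are installed; the input text is a placeholder.

```python
# Minimal sketch: extracting hidden-state features from GPT-2 XL in TensorFlow.
from transformers import GPT2Tokenizer, TFGPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2-xl")
model = TFGPT2Model.from_pretrained("openai-community/gpt2-xl")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="tf")
output = model(encoded_input)
print(output.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```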
openai-community/gpt2 | https://huggingface.co/openai-community/gpt2 | Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
this paper
and first released at this page. Disclaimer: The team releasing GPT-2 also wrote a
model card for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. This is the smallest version of GPT-2, with 124M parameters. Related Models: GPT-Large, GPT-Medium and GPT-XL You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
model card: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training. The model achieves the following results without any fine-tuning (zero-shot): | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/gpt2
### Model URL : https://huggingface.co/openai-community/gpt2
### Model Description : Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
this paper
and first released at this page. Disclaimer: The team releasing GPT-2 also wrote a
model card for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the
predictions for the token i only use the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating texts from a
prompt. This is the smallest version of GPT-2, with 124M parameters. Related Models: GPT-Large, GPT-Medium and GPT-XL. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
model card: Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training. The model achieves the following results without any fine-tuning (zero-shot): |
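The GPT-2 card above refers to a seeded text-generation pipeline and to PyTorch/TensorFlow feature-extraction snippets whose code did not survive the crawl. The following is a minimal sketch of what such usage typically looks like with the transformers library; treat it as illustrative rather than the card's exact listing.

```python
from transformers import pipeline, set_seed, GPT2Tokenizer, GPT2Model

# Text generation with a fixed seed for reproducibility.
generator = pipeline("text-generation", model="openai-community/gpt2")
set_seed(42)
print(generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5))

# Extracting hidden-state features in PyTorch.
tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state holds the token features
```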
openai-community/openai-gpt | https://huggingface.co/openai-community/openai-gpt | Model Description: openai-gpt (a.k.a. "GPT-1") is the first transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model in PyTorch: and in TensorFlow: This model can be used for language modeling tasks. Potential downstream uses of this model include tasks that leverage language models. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification. The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers also wrote in a blog post about risks and limitations of the model, including: The model developers write: We use the BooksCorpus dataset (Zhu et al., 2015) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. The model developers write: Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer. See the paper for further details and links to citations. The following evaluation information is extracted from the associated blog post. See the associated paper for further details. The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: Task: Textual Entailment Task: Semantic Similarity Task: Reading Comprehension Task: Commonsense Reasoning Task: Sentiment Analysis Task: Linguistic Acceptability Task: Multi Task Benchmark The model achieves the following results without any fine-tuning (zero-shot): The model developers report that: The total compute used to train this model was 0.96 petaflop days (pfs-days). 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. APA:
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. This model card was written by the Hugging Face team. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/openai-gpt
### Model URL : https://huggingface.co/openai-community/openai-gpt
### Model Description : Model Description: openai-gpt (a.k.a. "GPT-1") is the first transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies. Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility: Here is how to use this model in PyTorch: and in TensorFlow: This model can be used for language modeling tasks. Potential downstream uses of this model include tasks that leverage language models. In the associated paper, the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification. The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers also wrote in a blog post about risks and limitations of the model, including: The model developers write: We use the BooksCorpus dataset (Zhu et al., 2015) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. The model developers write: Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer. See the paper for further details and links to citations. The following evaluation information is extracted from the associated blog post. See the associated paper for further details. The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: Task: Textual Entailment Task: Semantic Similarity Task: Reading Comprehension Task: Commonsense Reasoning Task: Sentiment Analysis Task: Linguistic Acceptability Task: Multi Task Benchmark The model achieves the following results without any fine-tuning (zero-shot): The model developers report that: The total compute used to train this model was 0.96 petaflop days (pfs-days). 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. APA:
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. This model card was written by the Hugging Face team. |
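As with GPT-2, the openai-gpt card above mentions a seeded text-generation pipeline and PyTorch usage without the accompanying code. A minimal sketch, assuming the transformers library:

```python
from transformers import pipeline, set_seed

# Seeded generation with the original GPT ("GPT-1") checkpoint.
generator = pipeline("text-generation", model="openai-community/openai-gpt")
set_seed(42)
for out in generator("Hello, I'm a language model,", max_length=30, num_return_sequences=3):
    print(out["generated_text"])
```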
openai-community/roberta-base-openai-detector | https://huggingface.co/openai-community/roberta-base-openai-detector | Model Description: RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. The model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input. The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model. CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. In their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. In a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. The model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa base and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper. 
The model is a sequence classifier based on RoBERTa base (see the RoBERTa base model card for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here). The model developers write that: We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model. They later state: To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance. See the associated paper for further details on the training procedure. The following evaluation information is extracted from the associated paper. The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training. The model developers find: Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. See the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write that: See the associated paper for further details on the modeling architecture and training details. APA: https://huggingface.co/papers/1908.09203 This model card was written by the team at Hugging Face. This model can be instantiated and run with a Transformers pipeline: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/roberta-base-openai-detector
### Model URL : https://huggingface.co/openai-community/roberta-base-openai-detector
### Model Description : Model Description: RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. The model is a classifier that can be used to detect text generated by GPT-2 models. However, it is strongly suggested not to use it as a ChatGPT detector for the purposes of making grave allegations of academic misconduct against undergraduates and others, as this model might give inaccurate results in the case of ChatGPT-generated input. The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model. CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. In their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. In a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. The model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa base and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper. The model is a sequence classifier based on RoBERTa base (see the RoBERTa base model card for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here). 
The model developers write that: We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model. They later state: To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance. See the associated paper for further details on the training procedure. The following evaluation information is extracted from the associated paper. The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training. The model developers find: Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. See the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write that: See the associated paper for further details on the modeling architecture and training details. APA: https://huggingface.co/papers/1908.09203 This model card was written by the team at Hugging Face. This model can be instantiated and run with a Transformers pipeline: |
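The detector card above ends by noting that the model "can be instantiated and run with a Transformers pipeline". A minimal sketch of that usage, assuming the text-classification pipeline; the label names returned come from the checkpoint's own config:

```python
from transformers import pipeline

# GPT-2 output detector used as a binary sequence classifier.
detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")
result = detector("This passage may or may not have been written by a language model.")
print(result)  # e.g. [{'label': ..., 'score': ...}]; labels are defined by the model config
```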
FacebookAI/roberta-base | https://huggingface.co/FacebookAI/roberta-base | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English. Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team. RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. The RoBERTa model was pretrained on the reunion of five datasets: Together these datasets weigh 160GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 6e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-6, a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning
rate after. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/roberta-base
### Model URL : https://huggingface.co/FacebookAI/roberta-base
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English. Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team. RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. The RoBERTa model was pretrained on the reunion of five datasets: Together these datasets weigh 160GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 6e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-6, a weight decay of 0.01, learning rate warmup for 24,000 steps and linear decay of the learning
rate after. When fine-tuned on downstream tasks, this model achieves the following results: Glue test results: |
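The roberta-base card above refers to a fill-mask pipeline and to PyTorch/TensorFlow feature extraction without showing the code. A minimal sketch, assuming the transformers library; note that RoBERTa uses <mask> rather than [MASK]:

```python
from transformers import pipeline, RobertaTokenizer, RobertaModel

# Masked language modeling: predict the <mask> token.
unmasker = pipeline("fill-mask", model="FacebookAI/roberta-base")
print(unmasker("Hello I'm a <mask> model."))

# Feature extraction in PyTorch.
tokenizer = RobertaTokenizer.from_pretrained("FacebookAI/roberta-base")
model = RobertaModel.from_pretrained("FacebookAI/roberta-base")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
features = model(**encoded_input).last_hidden_state  # one vector per input token
```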
FacebookAI/roberta-large-mnli | https://huggingface.co/FacebookAI/roberta-large-mnli | Model Description: roberta-large-mnli is the RoBERTa large model fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. The model is a pretrained model on English language text using a masked language modeling (MLM) objective. Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so: You can then use this pipeline to classify sequences into any of the class names you specify. For example: This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The RoBERTa large model card notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral." Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. This model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information. As described in the RoBERTa large model card: The RoBERTa model was pretrained on the reunion of five datasets: Together theses datasets weight 160GB of text. Also see the bookcorpus data card and the wikipedia data card for additional information. As described in the RoBERTa large model card: The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). Also as described in the RoBERTa large model card: The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 4e-4, β1 = 0.9, β2 = 0.98 and
ε = 1e-6, a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
rate after. The following evaluation information is extracted from the associated GitHub repo for RoBERTa. The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: Dataset: Part of GLUE (Wang et al., 2019), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the Multi-Genre Natural Language Inference (MNLI) corpus. See the GLUE data card or Wang et al. (2019) for further information. The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus (Bowman et al., 2015) as 550k examples of auxiliary training data. Dataset: XNLI (Conneau et al., 2018), the extension of the Multi-Genre Natural Language Inference (MNLI) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the XNLI data card or Conneau et al. (2018) for further information. GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI XNLI test results: Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/roberta-large-mnli
### Model URL : https://huggingface.co/FacebookAI/roberta-large-mnli
### Model Description : Model Description: roberta-large-mnli is the RoBERTa large model fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. The model is a pretrained model on English language text using a masked language modeling (MLM) objective. Use the code below to get started with the model. The model can be loaded with the zero-shot-classification pipeline like so: You can then use this pipeline to classify sequences into any of the class names you specify. For example: This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo for examples) and zero-shot sequence classification. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The RoBERTa large model card notes that: "The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral." Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. This model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information. As described in the RoBERTa large model card: The RoBERTa model was pretrained on the reunion of five datasets: Together theses datasets weight 160GB of text. Also see the bookcorpus data card and the wikipedia data card for additional information. As described in the RoBERTa large model card: The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). Also as described in the RoBERTa large model card: The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 4e-4, β1 = 0.9, β2 = 0.98 and
ε = 1e-6, a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
rate after. The following evaluation information is extracted from the associated GitHub repo for RoBERTa. The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: Dataset: Part of GLUE (Wang et al., 2019), the General Language Understanding Evaluation benchmark, a collection of 9 datasets for evaluating natural language understanding systems. Specifically, the model was evaluated on the Multi-Genre Natural Language Inference (MNLI) corpus. See the GLUE data card or Wang et al. (2019) for further information. The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. We also use and recommend the SNLI corpus (Bowman et al., 2015) as 550k examples of auxiliary training data. Dataset: XNLI (Conneau et al., 2018), the extension of the Multi-Genre Natural Language Inference (MNLI) corpus to 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili and Urdu. See the XNLI data card or Conneau et al. (2018) for further information. GLUE test results (dev set, single model, single-task fine-tuning): 90.2 on MNLI XNLI test results: Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. |
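The roberta-large-mnli card above describes loading the model with the zero-shot-classification pipeline and classifying a sequence against arbitrary candidate labels; the example code itself was stripped during the crawl. A minimal sketch of that workflow:

```python
from transformers import pipeline

# NLI-based zero-shot classification: each candidate label is turned into a
# hypothesis and scored for entailment against the input sequence.
classifier = pipeline("zero-shot-classification", model="FacebookAI/roberta-large-mnli")
result = classifier(
    "A new moon has been discovered orbiting the planet.",
    candidate_labels=["space & cosmos", "biology", "economics"],
)
print(result["labels"], result["scores"])  # labels sorted by descending score
```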
openai-community/roberta-large-openai-detector | https://huggingface.co/openai-community/roberta-large-openai-detector | Model Description: RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. The model is a classifier that can be used to detect text generated by GPT-2 models. The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model. CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. In their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. In a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. The model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa large and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper. The model is a sequence classifier based on RoBERTa large (see the RoBERTa large model card for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here). 
The model developers write that: We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model. They later state: To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance. See the associated paper for further details on the training procedure. The following evaluation information is extracted from the associated paper. The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training. The model developers find: Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. See the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write that: See the associated paper for further details on the modeling architecture and training details. APA: https://huggingface.co/papers/1908.09203 This model card was written by the team at Hugging Face. More information needed | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : openai-community/roberta-large-openai-detector
### Model URL : https://huggingface.co/openai-community/roberta-large-openai-detector
### Model Description : Model Description: RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the largest GPT-2 model, the 1.5B parameter version. The model is a classifier that can be used to detect text generated by GPT-2 models. The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the associated paper for further discussion. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their associated paper, suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model. CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. In their associated paper, the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research. In a related blog post, the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write: We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective. The model developers also report finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the RoBERTa large and GPT-2 XL model cards for more information). The developers of this model discuss these issues further in their paper. The model is a sequence classifier based on RoBERTa large (see the RoBERTa large model card for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available here). The model developers write that: We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model. 
They later state: To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance. See the associated paper for further details on the training procedure. The following evaluation information is extracted from the associated paper. The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by: testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training. The model developers find: Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling (Holtzman et al., 2019. Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling. See the associated paper, Figure 1 (on page 14) and Figure 2 (on page 16) for full results. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write that: See the associated paper for further details on the modeling architecture and training details. APA: https://huggingface.co/papers/1908.09203 This model card was written by the team at Hugging Face. More information needed |
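The large detector card above leaves its getting-started section as "More information needed". Usage presumably mirrors the base detector earlier in this document; the sketch below is an assumption based on that parallel, not code taken from the card:

```python
from transformers import pipeline

# Large variant of the GPT-2 output detector (RoBERTa-large backbone).
# Assumed usage, mirroring the roberta-base detector pipeline shown above.
detector = pipeline("text-classification",
                    model="openai-community/roberta-large-openai-detector")
print(detector("Text whose provenance (human vs. GPT-2) we want to score."))
```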
FacebookAI/roberta-large | https://huggingface.co/FacebookAI/roberta-large | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English. Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team. RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. The RoBERTa model was pretrained on the union of five datasets: Together, these datasets weigh 160GB of text. The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model are pieces of 512 contiguous tokens that may span documents. The beginning of a new document is marked
with <s> and the end of one by </s>. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). In the remaining 10% of cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed). The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 4e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-6, a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
rate after. When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/roberta-large
### Model URL : https://huggingface.co/FacebookAI/roberta-large
### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English. Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team. RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2. You can use this model directly with a pipeline for masked language modeling, or use it to get the features of a given text in PyTorch or TensorFlow (a usage sketch follows this entry). The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions: This bias will also affect all fine-tuned versions of this model. The RoBERTa model was pretrained on the union of five datasets: Together, these datasets weigh 160GB of text. The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) and a vocabulary size of 50,000. The inputs of
the model are pieces of 512 contiguous tokens that may span documents. The beginning of a new document is marked
with <s> and the end of one by </s>. The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). In the remaining 10% of cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch and is not fixed). The model was trained on 1024 V100 GPUs for 500K steps with a batch size of 8K and a sequence length of 512. The
optimizer used is Adam with a learning rate of 4e-4, β1 = 0.9, β2 = 0.98 and ε = 1e-6, a weight decay of 0.01, learning rate warmup for 30,000 steps and linear decay of the learning
rate after. When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: |
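The pipeline and feature-extraction snippets referred to in the card above were not captured by the crawl. Below is a minimal sketch of that usage, assuming the standard transformers API and the FacebookAI/roberta-large checkpoint listed in this entry; the TensorFlow variant is analogous with TFAutoModel and return_tensors="tf".

```python
# Sketch only: fill-mask pipeline and PyTorch feature extraction with
# FacebookAI/roberta-large. RoBERTa's mask token is "<mask>".
from transformers import pipeline, AutoTokenizer, AutoModel

# Masked language modeling
unmasker = pipeline("fill-mask", model="FacebookAI/roberta-large")
print(unmasker("Hello I'm a <mask> model."))

# Feature extraction: the last hidden state can feed a downstream classifier
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-large")
model = AutoModel.from_pretrained("FacebookAI/roberta-large")
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, 1024)
```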
google-t5/t5-11b | https://huggingface.co/google-t5/t5-11b | The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-11B is the checkpoint with 11 billion parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-11B, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Before transformers v3.5.0, due to its immense size, t5-11b required some special treatment.
If you're using transformers <= v3.4.0, t5-11b should be loaded with flag use_cdn set to False as follows: Secondly, a single GPU will most likely not have enough memory to even load the model into memory as the weights alone amount to over 40 GB. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : google-t5/t5-11b
### Model URL : https://huggingface.co/google-t5/t5-11b
### Model Description : The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-11B is the checkpoint with 11 billion parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-11B, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Before transformers v3.5.0, due to its immense size, t5-11b required some special treatment.
If you're using transformers <= v3.4.0, t5-11b should be loaded with flag use_cdn set to False as follows: Secondly, a single GPU will most likely not have enough memory to even load the model into memory as the weights alone amount to over 40 GB. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context. |
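The legacy loading call mentioned above ("flag use_cdn set to False") is not shown in the crawled text. Below is a hedged reconstruction, assuming transformers <= v3.4.0 where from_pretrained still accepted a use_cdn argument; on current releases the flag no longer exists and a plain from_pretrained call is used, though the ~40 GB of weights still require ample memory or sharded loading.

```python
# Assumed reconstruction of the legacy call for transformers <= v3.4.0,
# where the use_cdn flag had to be disabled for the t5-11b checkpoint.
from transformers import T5ForConditionalGeneration

t5 = T5ForConditionalGeneration.from_pretrained("t5-11b", use_cdn=False)
```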
google-t5/t5-3b | https://huggingface.co/google-t5/t5-3b | The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-3B is the checkpoint with 3 billion parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-3B, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context on how to get started with this checkpoint. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : google-t5/t5-3b
### Model URL : https://huggingface.co/google-t5/t5-3b
### Model Description : The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-3B is the checkpoint with 3 billion parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-3B, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more context on how to get started with this checkpoint. |
google-t5/t5-base | https://huggingface.co/google-t5/t5-base | The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Base is the checkpoint with 220 million parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-Base, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : google-t5/t5-base
### Model URL : https://huggingface.co/google-t5/t5-base
### Model Description : The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Base is the checkpoint with 220 million parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-Base, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples. |
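The "code below" referenced in the card was not captured by the crawl. Here is a minimal text-to-text sketch with the google-t5/t5-base checkpoint; the translation prefix is an illustrative assumption based on the task prefixes T5 was trained with, and the same pattern applies to the other T5 checkpoints in this list.

```python
# Minimal sketch: conditional generation with google-t5/t5-base.
# Other task prefixes (summarization, QA, etc.) work the same way.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-base")

input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```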
google-t5/t5-large | https://huggingface.co/google-t5/t5-large | The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Large is the checkpoint with 770 million parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-Large, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : google-t5/t5-large
### Model URL : https://huggingface.co/google-t5/t5-large
### Model Description : The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Large is the checkpoint with 770 million parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-Large, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples. |
google-t5/t5-small | https://huggingface.co/google-t5/t5-small | The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Small is the checkpoint with 60 million parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-small, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : google-t5/t5-small
### Model URL : https://huggingface.co/google-t5/t5-small
### Model Description : The developers of the Text-To-Text Transfer Transformer (T5) write: With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task. T5-Small is the checkpoint with 60 million parameters. The developers write in a blog post that the model: Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself. See the blog post and research paper for further details. More information needed. More information needed. More information needed. The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5. The model was pre-trained on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
Thereby, the following datasets were being used for (1.) and (2.): In their abstract, the model developers write: In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the research paper for further details. The developers evaluated the model on 24 tasks, see the research paper for full details. For full results for T5-small, see the research paper, Table 14. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples. |
transfo-xl/transfo-xl-wt103 | https://huggingface.co/transfo-xl/transfo-xl-wt103 | Model Description:
The Transformer-XL model is a causal (uni-directional) transformer with relative positioning (sinusoïdal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied). This model can be used for text generation.
The authors provide additional notes about the vocabulary used, in the associated paper: We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The authors provide additional notes, in the associated paper, about text generation with their best model trained on the Wikitext-103 dataset: We seed our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text. The authors use the following pretraining corpora for the model, described in the associated paper: The authors provide additional notes about the training procedure used, in the associated paper: Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowercasing the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on enwik8 to text8 without further tuning. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : transfo-xl/transfo-xl-wt103
### Model URL : https://huggingface.co/transfo-xl/transfo-xl-wt103
### Model Description : Model Description:
The Transformer-XL model is a causal (uni-directional) transformer with relative positioning (sinusoïdal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied). This model can be used for text generation.
The authors provide additional notes about the vocabulary used, in the associated paper: We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling. The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The authors provide additional notes, in the associated paper, about text generation with their best model trained on the Wikitext-103 dataset: We seed our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text. The authors use the following pretraining corpora for the model, described in the associated paper: The authors provide additional notes about the training procedure used, in the associated paper: Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowercasing the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on enwik8 to text8 without further tuning. |
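A hedged sketch of the text-generation use mentioned above, mirroring the top-40 sampling described in the quoted paper excerpt. It assumes a transformers version that still ships the TransfoXL classes (they have been deprecated in recent releases).

```python
# Sketch only: top-k sampling (k=40) with the transfo-xl/transfo-xl-wt103 checkpoint.
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl/transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl/transfo-xl-wt103")

inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"], do_sample=True, top_k=40, max_new_tokens=50
)
print(tokenizer.decode(outputs[0]))
```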
FacebookAI/xlm-clm-ende-1024 | https://huggingface.co/FacebookAI/xlm-clm-ende-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-ende-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-German. The model is a language model. The model can be used for causal language modeling. To learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the associated paper for details on the training data and training procedure. See the associated paper for details on the testing data, factors and metrics. For xlm-clm-ende-1024 results, see Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-clm-ende-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-clm-ende-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-ende-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-German. The model is a language model. The model can be used for causal language modeling. To learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the associated paper for details on the training data and training procedure. See the associated paper for details on the testing data, factors and metrics. For xlm-clm-ende-1024 results, see Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. |
FacebookAI/xlm-clm-enfr-1024 | https://huggingface.co/FacebookAI/xlm-clm-enfr-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French. The model is a language model. The model can be used for causal language modeling (next token prediction). To learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the associated paper for details on the training data and training procedure. See the associated paper for details on the testing data, factors and metrics. For xlm-clm-enfr-1024 results, see Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-clm-enfr-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-clm-enfr-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-clm-enfr-1024 is a transformer pretrained using a causal language modeling (CLM) objective (next token prediction) for English-French. The model is a language model. The model can be used for causal language modeling (next token prediction). To learn more about this task and potential downstream uses, see the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the associated paper for details on the training data and training procedure. See the associated paper for details on the testing data, factors and metrics. For xlm-clm-enfr-1024 results, see Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. |
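The "code below" referenced in the card is missing from the crawl. Here is a sketch of running the causal language model with language embeddings, assuming the XLMTokenizer/XLMWithLMHeadModel classes and the tokenizer's lang2id mapping; the same pattern applies to the English-German checkpoint in the previous entry.

```python
# Sketch only: XLM CLM checkpoints select the language embedding via a `langs` tensor.
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-clm-enfr-1024")

input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")])
language_id = tokenizer.lang2id["en"]  # e.g. {"en": 0, "fr": 1}
langs = torch.full_like(input_ids, language_id)

outputs = model(input_ids, langs=langs)
print(outputs.logits.shape)  # next-token scores for causal language modeling
```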
FacebookAI/xlm-mlm-100-1280 | https://huggingface.co/FacebookAI/xlm-mlm-100-1280 | xlm-mlm-100-1280 is the XLM model, which was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair-encoding. See the GitHub repo and the associated paper for further details on the training data and training procedure. Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). The model developers evaluated the model on the XNLI cross-lingual classification task (see the XNLI data card for more details on XNLI) using the metric of test accuracy. See the GitHub Repo for further details on the testing data, factors and metrics. For xlm-mlm-100-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are: See the GitHub repo for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. See the ipython notebook in the associated GitHub repo for examples. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-100-1280
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-100-1280
### Model Description : xlm-mlm-100-1280 is the XLM model, which was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau, trained on Wikipedia text in 100 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. This model is the XLM model trained on Wikipedia text in 100 languages. The preprocessing included tokenization with byte-pair-encoding. See the GitHub repo and the associated paper for further details on the training data and training procedure. Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). The model developers evaluated the model on the XNLI cross-lingual classification task (see the XNLI data card for more details on XNLI) using the metric of test accuracy. See the GitHub Repo for further details on the testing data, factors and metrics. For xlm-mlm-100-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), Chinese (zh) and Urdu (ur) are: See the GitHub repo for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. See the ipython notebook in the associated GitHub repo for examples. |
FacebookAI/xlm-mlm-17-1280 | https://huggingface.co/FacebookAI/xlm-mlm-17-1280 | xlm-mlm-17-1280 is the XLM model, which was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau, trained on text in 17 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. This model is the XLM model trained on text in 17 languages. The preprocessing included tokenization and byte-pair-encoding. See the GitHub repo and the associated paper for further details on the training data and training procedure. Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). The model developers evaluated the model on the XNLI cross-lingual classification task (see the XNLI data card for more details on XNLI) using the metric of test accuracy. See the GitHub Repo for further details on the testing data, factors and metrics. For xlm-mlm-17-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), and Chinese (zh): See the GitHub repo for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. See the ipython notebook in the associated GitHub repo for examples. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-17-1280
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-17-1280
### Model Description : xlm-mlm-17-1280 is the XLM model, which was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau, trained on text in 17 languages. The model is a transformer pretrained using a masked language modeling (MLM) objective. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. This model is the XLM model trained on text in 17 languages. The preprocessing included tokenization and byte-pair-encoding. See the GitHub repo and the associated paper for further details on the training data and training procedure. Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). The model developers evaluated the model on the XNLI cross-lingual classification task (see the XNLI data card for more details on XNLI) using the metric of test accuracy. See the GitHub Repo for further details on the testing data, factors and metrics. For xlm-mlm-17-1280, the test accuracy on the XNLI cross-lingual classification task in English (en), Spanish (es), German (de), Arabic (ar), and Chinese (zh): See the GitHub repo for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Conneau et al. (2020) report that this model has 16 layers, 1280 hidden states, 16 attention heads, and the dimension of the feed-forward layer is 1520. The vocabulary size is 200k and the total number of parameters is 570M (see Table 7). BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. See the ipython notebook in the associated GitHub repo for examples. |
FacebookAI/xlm-mlm-en-2048 | https://huggingface.co/FacebookAI/xlm-mlm-en-2048 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau. It’s a transformer pretrained with either a causal language modeling (CLM) objective (next token prediction), a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (extension of BERT’s MLM to multiple language inputs). This model is trained with a masked language modeling objective on English text. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed. See the associated GitHub Repo. More information needed. See the associated GitHub Repo. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face XLM docs for more examples. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-en-2048
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-en-2048
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau. It’s a transformer pretrained with either a causal language modeling (CLM) objective (next token prediction), a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (extension of BERT’s MLM to multiple language inputs). This model is trained with a masked language modeling objective on English text. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. Also see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed. See the associated GitHub Repo. More information needed. See the associated GitHub Repo. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. See the Hugging Face XLM docs for more examples. |
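The card above says "Use the code below to get started with the model," but the code itself was not captured by the crawl. A minimal sketch of feature extraction with this checkpoint, assuming the `XLMTokenizer`/`XLMModel` classes in `transformers`:

```python
import torch
from transformers import XLMTokenizer, XLMModel

# Sketch only; the example sentence is a placeholder, not from the original card.
tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-en-2048")
model = XLMModel.from_pretrained("FacebookAI/xlm-mlm-en-2048")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state  # (batch, sequence_length, hidden_size)
```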
FacebookAI/xlm-mlm-ende-1024 | https://huggingface.co/FacebookAI/xlm-mlm-ende-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers write: In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the associated paper for links, citations, and further details on the training data and training procedure. The model developers also write that: If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. See the associated GitHub Repo for further details. The model developers evaluated the model on the WMT'16 English-German dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics. For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-ende-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-ende-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers write: In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the associated paper for links, citations, and further details on the training data and training procedure. The model developers also write that: If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. See the associated GitHub Repo for further details. The model developers evaluated the model on the WMT'16 English-German dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics. For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. |
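The card repeatedly notes that this model uses language embeddings to specify the language at inference, but the crawl did not preserve an example. A sketch of that mechanism, assuming the `lang2id` mapping and `langs` argument that `transformers` documents for XLM checkpoints; the German sentence is a placeholder:

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-ende-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-mlm-ende-1024")

# Encode a German sentence and build a matching tensor of language IDs
input_ids = tokenizer.encode("Wikipedia wurde zum Training dieses Modells verwendet.", return_tensors="pt")
langs = torch.full_like(input_ids, tokenizer.lang2id["de"])

with torch.no_grad():
    outputs = model(input_ids, langs=langs)
```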
FacebookAI/xlm-mlm-enfr-1024 | https://huggingface.co/FacebookAI/xlm-mlm-enfr-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers write: In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the associated paper for links, citations, and further details on the training data and training procedure. The model developers also write that: If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. See the associated GitHub Repo for further details. The model developers evaluated the model on the WMT'14 English-French dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics. For xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-enfr-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-enfr-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers write: In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the associated paper for links, citations, and further details on the training data and training procedure. The model developers also write that: If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. See the associated GitHub Repo for further details. The model developers evaluated the model on the WMT'14 English-French dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics. For xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. |
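As with the English-German checkpoint, inference with this model combines a masked input with per-token language IDs. A hedged sketch of predicting a masked French word, assuming the standard XLM classes in `transformers`; the sentence is a placeholder:

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-mlm-enfr-1024")

# Mask one French word and attach the French language ID to every position
text = f"Paris est la capitale de la {tokenizer.mask_token} ."
input_ids = tokenizer.encode(text, return_tensors="pt")
langs = torch.full_like(input_ids, tokenizer.lang2id["fr"])

with torch.no_grad():
    logits = model(input_ids, langs=langs).logits

# Take the highest-scoring token at the masked position
mask_pos = (input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```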
FacebookAI/xlm-mlm-enro-1024 | https://huggingface.co/FacebookAI/xlm-mlm-enro-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers write: In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the associated paper for links, citations, and further details on the training data and training procedure. The model developers also write that: If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. See the associated GitHub Repo for further details. The model developers evaluated the model on the WMT'16 English-Romanian dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics. For xlm-mlm-enro-1024 results, see Tables 1-3 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-enro-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-enro-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-enro-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-Romanian. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. The model is a language model. The model can be used for masked language modeling. To learn more about this task and potential downstream uses, see the Hugging Face fill mask docs and the Hugging Face Multilingual Models for Inference docs. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. The model developers write: In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the associated paper for links, citations, and further details on the training data and training procedure. The model developers also write that: If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. See the associated GitHub Repo for further details. The model developers evaluated the model on the WMT'16 English-Romanian dataset using the BLEU metric. See the associated paper for further details on the testing data, factors and metrics. For xlm-mlm-enro-1024 results, see Tables 1-3 of the associated paper. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The model developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. More information needed. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. |
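A quick way to see which language-embedding IDs this English-Romanian checkpoint understands (sketch only; the exact contents of the mapping depend on the published tokenizer files):

```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-enro-1024")
# These IDs are what gets passed through the `langs` argument at inference time;
# for this checkpoint the mapping is expected to include "en" and "ro".
print(tokenizer.lang2id)
```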
FacebookAI/xlm-mlm-tlm-xnli15-1024 | https://huggingface.co/FacebookAI/xlm-mlm-tlm-xnli15-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the XNLI data card for further information on XNLI). The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see Evaluation). This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Training details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. The model developers write: We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b). For fine-tuning, the developers used the English NLI dataset (see the XNLI data card). The model developers write: We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1. The model developers write: We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens. When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. 
In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state. We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy.See the associated paper for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write: We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations. The developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. BibTeX: APA: This model card was written by the team at Hugging Face. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-tlm-xnli15-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-tlm-xnli15-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the XNLI data card for further information on XNLI). The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see Evaluation). This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Training details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. The model developers write: We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b). For fine-tuning, the developers used the English NLI dataset (see the XNLI data card). The model developers write: We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1. The model developers write: We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens. When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state. 
We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the associated paper for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write: We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations. The developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. BibTeX: APA: This model card was written by the team at Hugging Face. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. |
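The quoted passage says the XNLI classifier is fed the first hidden state of the last transformer layer. A hedged sketch of producing that representation for a premise/hypothesis pair with language embeddings attached (the German pair is a placeholder, and no fine-tuned classification head is shown):

```python
import torch
from transformers import XLMTokenizer, XLMModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-tlm-xnli15-1024")
model = XLMModel.from_pretrained("FacebookAI/xlm-mlm-tlm-xnli15-1024")

# Encode a premise/hypothesis pair and attach the matching language IDs
premise = "Der Hund schläft im Garten."
hypothesis = "Ein Tier ruht sich aus."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
inputs["langs"] = torch.full_like(inputs["input_ids"], tokenizer.lang2id["de"])

with torch.no_grad():
    outputs = model(**inputs)

# First hidden state of the last layer, as described in the quoted paper text
sentence_representation = outputs.last_hidden_state[:, 0]
```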
FacebookAI/xlm-mlm-xnli15-1024 | https://huggingface.co/FacebookAI/xlm-mlm-xnli15-1024 | The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the XNLI data card for further information on XNLI). The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see Evaluation). This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Training details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. The model developers write: We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b). For fine-tuning, the developers used the English NLI dataset (see the XNLI data card). The model developers write: We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1. The model developers write: We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens. When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state. 
We implement all our models in Py-Torch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy.See the associated paper for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write: We also consider the masked language model- ing (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Follow- ing Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences be- tween our approach and the MLM of Devlin et al. (2018) include the use of text streams of an ar- bitrary number of sentences (truncated at 256 to- kens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1. The developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. BibTeX: APA: This model card was written by the team at Hugging Face. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-mlm-xnli15-1024
### Model URL : https://huggingface.co/FacebookAI/xlm-mlm-xnli15-1024
### Model Description : The XLM model was proposed in Cross-lingual Language Model Pretraining by Guillaume Lample, Alexis Conneau. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the XNLI data card for further information on XNLI). The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see Evaluation). This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the associated paper. The model should not be used to intentionally create hostile or alienating environments for people. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Training details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. The model developers write: We use WikiExtractor2 to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b). For fine-tuning, the developers used the English NLI dataset (see the XNLI data card). The model developers write: We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1. The model developers write: We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. For the CLM and MLM objectives, we use streams of 256 tokens and a mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens. When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10−4 to 2.10−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work bet- ter than using the first hidden state. 
We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the associated paper for further details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). Details are culled from the associated paper. See the paper for links, citations, and further details. Also see the associated GitHub Repo for further details. xlm-mlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective fine-tuned on the English NLI dataset. About the MLM objective, the developers write: We also consider the masked language modeling (MLM) objective of Devlin et al. (2018), also known as the Cloze task (Taylor, 1953). Following Devlin et al. (2018), we sample randomly 15% of the BPE tokens from the text streams, replace them by a [MASK] token 80% of the time, by a random token 10% of the time, and we keep them unchanged 10% of the time. Differences between our approach and the MLM of Devlin et al. (2018) include the use of text streams of an arbitrary number of sentences (truncated at 256 tokens) instead of pairs of sentences. To counter the imbalance between rare and frequent tokens (e.g. punctuations or stop words), we also subsample the frequent outputs using an approach similar to Mikolov et al. (2013b): tokens in a text stream are sampled according to a multinomial distribution, whose weights are proportional to the square root of their invert frequencies. Our MLM objective is illustrated in Figure 1. The developers write: We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. BibTeX: APA: This model card was written by the team at Hugging Face. This model uses language embeddings to specify the language used at inference. See the Hugging Face Multilingual Models for Inference docs for further details. |
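The quoted passage describes the 80%/10%/10% corruption scheme behind the MLM objective. A purely illustrative sketch of that scheme (the frequency-based subsampling mentioned in the quote is omitted, and the function name is hypothetical rather than taken from the XLM codebase):

```python
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int, vocab_size: int,
                mlm_probability: float = 0.15):
    """Illustrative 80/10/10 MLM corruption, as described in the quoted passage."""
    labels = input_ids.clone()

    # Sample 15% of positions as prediction targets
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # unselected positions are ignored by the loss

    # 80% of the targets are replaced by the mask token
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = mask_token_id

    # 10% are replaced by a random token (half of the remaining 20%)
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]

    # the final 10% of targets are left unchanged
    return input_ids, labels
```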
FacebookAI/xlm-roberta-base | https://huggingface.co/FacebookAI/xlm-roberta-base | XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-roberta-base
### Model URL : https://huggingface.co/FacebookAI/xlm-roberta-base
### Model Description : XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: |
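The two examples referenced at the end of the card (a fill-mask pipeline and feature extraction in PyTorch) were not captured by the crawl. A sketch of both, assuming the standard `transformers` APIs for XLM-RoBERTa:

```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModel

# 1) Masked language modeling via the fill-mask pipeline
unmasker = pipeline("fill-mask", model="FacebookAI/xlm-roberta-base")
print(unmasker("Hello I'm a <mask> model."))

# 2) Feature extraction in PyTorch (the input sentence is a placeholder)
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModel.from_pretrained("FacebookAI/xlm-roberta-base")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state
```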
FacebookAI/xlm-roberta-large-finetuned-conll02-dutch | https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll02-dutch | The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Dutch. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-roberta-large-finetuned-conll02-dutch
### Model URL : https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll02-dutch
### Model Description : The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Dutch. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. |
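The card says the model can be used directly within a pipeline for NER, but the snippet was stripped. A minimal sketch, with a placeholder Dutch sentence:

```python
from transformers import pipeline

ner = pipeline("ner", model="FacebookAI/xlm-roberta-large-finetuned-conll02-dutch")
print(ner("Mijn naam is Emma en ik woon in Amsterdam."))  # example input, not from the card
```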
FacebookAI/xlm-roberta-large-finetuned-conll02-spanish | https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll02-spanish | The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Spanish. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-roberta-large-finetuned-conll02-spanish
### Model URL : https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll02-spanish
### Model Description : The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the CoNLL-2002 dataset in Spanish. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. |
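For the Spanish checkpoint, the same NER pipeline applies; this sketch additionally groups word pieces into whole entity spans (the Spanish sentence is a placeholder):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces into complete entity spans
ner = pipeline(
    "token-classification",
    model="FacebookAI/xlm-roberta-large-finetuned-conll02-spanish",
    aggregation_strategy="simple",
)
print(ner("Me llamo Lucía y trabajo para Telefónica en Madrid."))
```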
FacebookAI/xlm-roberta-large-finetuned-conll03-english | https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-english | The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in English. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). In the context of tasks relevant to this model, Mishra et al. (2020) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from Mishra et al. (2020): Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-roberta-large-finetuned-conll03-english
### Model URL : https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-english
### Model Description : The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in English. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). In the context of tasks relevant to this model, Mishra et al. (2020) explore social biases in NER systems for English and find that there is systematic bias in existing NER systems in that they fail to identify named entities from different demographic groups (though this paper did not look at BERT). For example, using a sample sentence from Mishra et al. (2020): Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. |
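The NER code this card refers to ("Use the code below to get started with the model") is not included in the crawled text. A minimal sketch follows, using the checkpoint id from the URL above; the example sentence is illustrative and is not the Mishra et al. (2020) sample mentioned in the card.

```python
# Hedged sketch: explicit tokenizer/model loading, then a transformers NER pipeline.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "FacebookAI/xlm-roberta-large-finetuned-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("ner", model=model, tokenizer=tokenizer)

# Illustrative sentence (not the sample sentence from the card).
print(ner("Sarah lives in London and works for Acme Corp."))
```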
FacebookAI/xlm-roberta-large-finetuned-conll03-german | https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-german | The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in German. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-roberta-large-finetuned-conll03-german
### Model URL : https://huggingface.co/FacebookAI/xlm-roberta-large-finetuned-conll03-german
### Model Description : The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is XLM-RoBERTa-large fine-tuned with the conll2003 dataset in German. The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text. Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face token classification docs. The model should not be used to intentionally create hostile or alienating environments for people. CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. See the following resources for training data and training procedure details: See the associated paper for evaluation details. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). See the associated paper for further details. BibTeX: APA: This model card was written by the team at Hugging Face. Use the code below to get started with the model. You can use this model directly within a pipeline for NER. |
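As with the English checkpoint, the pipeline snippet the card points to is missing from the crawl. A minimal sketch with the checkpoint id from the URL above and an illustrative German sentence; the printed fields follow the standard transformers token-classification output.

```python
# Hedged sketch of German NER; entity groups (PER/LOC/ORG/MISC) follow the CoNLL-2003 scheme.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="FacebookAI/xlm-roberta-large-finetuned-conll03-german",
    aggregation_strategy="simple",
)

for entity in ner("Goethe wurde in Frankfurt am Main geboren."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```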
FacebookAI/xlm-roberta-large | https://huggingface.co/FacebookAI/xlm-roberta-large | XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FacebookAI/xlm-roberta-large
### Model URL : https://huggingface.co/FacebookAI/xlm-roberta-large
### Model Description : XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al. and first released in this repository. Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch:
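The two usage snippets the card points to (a fill-mask pipeline and PyTorch feature extraction) are not in the crawled text. A minimal sketch under the standard transformers API; the example strings are placeholders.

```python
# Hedged sketch: masked language modeling and hidden-state feature extraction.
import torch
from transformers import AutoModel, AutoTokenizer, pipeline

model_id = "FacebookAI/xlm-roberta-large"

# Fill-mask pipeline; XLM-RoBERTa uses <mask> as its mask token.
unmasker = pipeline("fill-mask", model=model_id)
print(unmasker("Hello, I'm a <mask> model."))

# Feature extraction in PyTorch.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state  # shape: (batch, sequence_length, hidden_size)
```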
xlnet/xlnet-base-cased | https://huggingface.co/xlnet/xlnet-base-cased | XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team. XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking. The model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. Here is how to use this model to get the features of a given text in PyTorch: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : xlnet/xlnet-base-cased
### Model URL : https://huggingface.co/xlnet/xlnet-base-cased
### Model Description : XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team. XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking. The model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. Here is how to use this model to get the features of a given text in PyTorch: |
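The PyTorch feature-extraction snippet mentioned at the end of this card is missing from the crawl; a minimal sketch using the checkpoint id from the URL above, with a placeholder input string.

```python
# Hedged sketch: extracting token-level features with XLNet in PyTorch.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "xlnet/xlnet-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```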
xlnet/xlnet-large-cased | https://huggingface.co/xlnet/xlnet-large-cased | XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team. XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking. The model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. Here is how to use this model to get the features of a given text in PyTorch: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : xlnet/xlnet-large-cased
### Model URL : https://huggingface.co/xlnet/xlnet-large-cased
### Model Description : XLNet model pre-trained on English language. It was introduced in the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Yang et al. and first released in this repository. Disclaimer: The team releasing XLNet did not write a model card for this model so this model card has been written by the Hugging Face team. XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking. The model is mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. Here is how to use this model to get the features of a given text in PyTorch: |
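The same feature-extraction snippet is missing for the large checkpoint. The sketch below additionally mean-pools the token features into a single sentence vector, which is one common way to use them downstream; that pooling step is an illustration, not something the card specifies.

```python
# Hedged sketch: token features from xlnet-large-cased, mean-pooled to a sentence vector.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "xlnet/xlnet-large-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    token_features = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

sentence_vector = token_features.mean(dim=1)  # simple mean pooling over tokens
print(sentence_vector.shape)
```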
007J/smile | https://huggingface.co/007J/smile | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 007J/smile
### Model URL : https://huggingface.co/007J/smile
### Model Description : No model card
0307061430/xuangou | https://huggingface.co/0307061430/xuangou | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 0307061430/xuangou
### Model URL : https://huggingface.co/0307061430/xuangou
### Model Description : No model card
09panesara/distilbert-base-uncased-finetuned-cola | https://huggingface.co/09panesara/distilbert-base-uncased-finetuned-cola | This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 09panesara/distilbert-base-uncased-finetuned-cola
### Model URL : https://huggingface.co/09panesara/distilbert-base-uncased-finetuned-cola
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
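This card is an auto-generated training stub whose metric and hyperparameter lists did not survive the crawl, so nothing is filled in here. For completeness, a hedged inference sketch with the checkpoint id from the URL above; the label names the pipeline returns depend on the training script's label mapping, which the card does not state.

```python
# Hedged sketch: running the fine-tuned DistilBERT CoLA checkpoint for inference.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="09panesara/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a grammatical-acceptability task; the label-to-meaning mapping is an assumption.
print(classifier("The book was written by the author."))
```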
0x7o/keyt5-base | https://huggingface.co/0x7o/keyt5-base | Supported languages: ru. Github - text2keywords. Pretraining: Large version / Base version. Example usage (the code returns a list with keywords; duplicates are possible). Go to the training notebook and learn more about it. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 0x7o/keyt5-base
### Model URL : https://huggingface.co/0x7o/keyt5-base
### Model Description : Supported languages: ru. Github - text2keywords. Pretraining: Large version / Base version. Example usage (the code returns a list with keywords; duplicates are possible). Go to the training notebook and learn more about it.
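The "Example usage" code referenced above lives in the text2keywords repository and is not reproduced in the crawl. A generic seq2seq sketch follows, under the assumption that the checkpoint is a T5-style Russian text-to-text model whose output is a comma-separated keyword string; the decoding settings are illustrative, not the repository's own helper.

```python
# Hedged sketch: keyword extraction treated as generic seq2seq text-to-text generation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "0x7o/keyt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Пример русского текста, для которого нужно получить ключевые слова."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64)

keywords = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print([k.strip() for k in keywords.split(",")])  # the card notes duplicates are possible
```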
0x7o/keyt5-large | https://huggingface.co/0x7o/keyt5-large | Supported languages: ru. Github - text2keywords. Pretraining: Large version / Base version. Example usage (the code returns a list with keywords; duplicates are possible). Go to the training notebook and learn more about it. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 0x7o/keyt5-large
### Model URL : https://huggingface.co/0x7o/keyt5-large
### Model Description : Supported languages: ru. Github - text2keywords. Pretraining: Large version / Base version. Example usage (the code returns a list with keywords; duplicates are possible). Go to the training notebook and learn more about it.
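The same usage pattern applies to the large checkpoint. The variant below adds beam search with n-gram blocking as one possible way to reduce the duplicate keywords the card warns about; these generation settings are assumptions, not taken from the card or the repository.

```python
# Hedged sketch: same seq2seq usage as the base checkpoint, with beam search to curb repeats.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "0x7o/keyt5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Пример русского текста для извлечения ключевых слов.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4, no_repeat_ngram_size=2)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```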
0xDEADBEA7/DialoGPT-small-rick | https://huggingface.co/0xDEADBEA7/DialoGPT-small-rick | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 0xDEADBEA7/DialoGPT-small-rick
### Model URL : https://huggingface.co/0xDEADBEA7/DialoGPT-small-rick
### Model Description : |
123123/ghfk | https://huggingface.co/123123/ghfk | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123123/ghfk
### Model URL : https://huggingface.co/123123/ghfk
### Model Description : No model card
123456/Arcanegan | https://huggingface.co/123456/Arcanegan | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123456/Arcanegan
### Model URL : https://huggingface.co/123456/Arcanegan
### Model Description : No model card
1234567/1234567 | https://huggingface.co/1234567/1234567 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1234567/1234567
### Model URL : https://huggingface.co/1234567/1234567
### Model Description : No model card
123abhiALFLKFO/albert-base-v2-finetuned-sst2 | https://huggingface.co/123abhiALFLKFO/albert-base-v2-finetuned-sst2 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123abhiALFLKFO/albert-base-v2-finetuned-sst2
### Model URL : https://huggingface.co/123abhiALFLKFO/albert-base-v2-finetuned-sst2
### Model Description : No model card
123abhiALFLKFO/albert-base-v2-yelp-polarity-finetuned-sst2 | https://huggingface.co/123abhiALFLKFO/albert-base-v2-yelp-polarity-finetuned-sst2 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123abhiALFLKFO/albert-base-v2-yelp-polarity-finetuned-sst2
### Model URL : https://huggingface.co/123abhiALFLKFO/albert-base-v2-yelp-polarity-finetuned-sst2
### Model Description : No model card
123abhiALFLKFO/distilbert-base-uncased-finetuned-cola | https://huggingface.co/123abhiALFLKFO/distilbert-base-uncased-finetuned-cola | This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123abhiALFLKFO/distilbert-base-uncased-finetuned-cola
### Model URL : https://huggingface.co/123abhiALFLKFO/distilbert-base-uncased-finetuned-cola
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
123addfg/ar | https://huggingface.co/123addfg/ar | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123addfg/ar
### Model URL : https://huggingface.co/123addfg/ar
### Model Description : No model card
123www/test_model | https://huggingface.co/123www/test_model | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 123www/test_model
### Model URL : https://huggingface.co/123www/test_model
### Model Description : No model card
13048909972/wav2vec2-common_voice-tr-demo | https://huggingface.co/13048909972/wav2vec2-common_voice-tr-demo | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13048909972/wav2vec2-common_voice-tr-demo
### Model URL : https://huggingface.co/13048909972/wav2vec2-common_voice-tr-demo
### Model Description : No model card
13048909972/wav2vec2-large-xls-r-300m-tr-colab | https://huggingface.co/13048909972/wav2vec2-large-xls-r-300m-tr-colab | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13048909972/wav2vec2-large-xls-r-300m-tr-colab
### Model URL : https://huggingface.co/13048909972/wav2vec2-large-xls-r-300m-tr-colab
### Model Description : No model card
13048909972/wav2vec2-large-xlsr-53_common_voice_20211210112254 | https://huggingface.co/13048909972/wav2vec2-large-xlsr-53_common_voice_20211210112254 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13048909972/wav2vec2-large-xlsr-53_common_voice_20211210112254
### Model URL : https://huggingface.co/13048909972/wav2vec2-large-xlsr-53_common_voice_20211210112254
### Model Description : No model card
13048909972/wav2vec2-large-xlsr-53_common_voice_20211211085606 | https://huggingface.co/13048909972/wav2vec2-large-xlsr-53_common_voice_20211211085606 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13048909972/wav2vec2-large-xlsr-53_common_voice_20211211085606
### Model URL : https://huggingface.co/13048909972/wav2vec2-large-xlsr-53_common_voice_20211211085606
### Model Description : No model card
13306330378/huiqi_model | https://huggingface.co/13306330378/huiqi_model | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13306330378/huiqi_model
### Model URL : https://huggingface.co/13306330378/huiqi_model
### Model Description : No model card
13on/gpt2-wishes | https://huggingface.co/13on/gpt2-wishes | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13on/gpt2-wishes
### Model URL : https://huggingface.co/13on/gpt2-wishes
### Model Description : No model card
13on/kw2t-wishes | https://huggingface.co/13on/kw2t-wishes | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13on/kw2t-wishes
### Model URL : https://huggingface.co/13on/kw2t-wishes
### Model Description : No model card
13onn/gpt2-wishes-2 | https://huggingface.co/13onn/gpt2-wishes-2 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13onn/gpt2-wishes-2
### Model URL : https://huggingface.co/13onn/gpt2-wishes-2
### Model Description : No model card
13onnn/gpt2-wish | https://huggingface.co/13onnn/gpt2-wish | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 13onnn/gpt2-wish
### Model URL : https://huggingface.co/13onnn/gpt2-wish
### Model Description : No model card
1503277708/namo | https://huggingface.co/1503277708/namo | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1503277708/namo
### Model URL : https://huggingface.co/1503277708/namo
### Model Description : No model card
1575/7447 | https://huggingface.co/1575/7447 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1575/7447
### Model URL : https://huggingface.co/1575/7447
### Model Description : No model card
1712871/manual_vn_electra_small | https://huggingface.co/1712871/manual_vn_electra_small | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1712871/manual_vn_electra_small
### Model URL : https://huggingface.co/1712871/manual_vn_electra_small
### Model Description : No model card
1757968399/tinybert_4_312_1200 | https://huggingface.co/1757968399/tinybert_4_312_1200 | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1757968399/tinybert_4_312_1200
### Model URL : https://huggingface.co/1757968399/tinybert_4_312_1200
### Model Description : No model card
17luke/wav2vec2-large-xls-r-300m-icelandic-samromur | https://huggingface.co/17luke/wav2vec2-large-xls-r-300m-icelandic-samromur | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 17luke/wav2vec2-large-xls-r-300m-icelandic-samromur
### Model URL : https://huggingface.co/17luke/wav2vec2-large-xls-r-300m-icelandic-samromur
### Model Description : No model card
18811449050/bert_cn_finetuning | https://huggingface.co/18811449050/bert_cn_finetuning | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 18811449050/bert_cn_finetuning
### Model URL : https://huggingface.co/18811449050/bert_cn_finetuning
### Model Description : No model card
18811449050/bert_finetuning_test | https://huggingface.co/18811449050/bert_finetuning_test | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 18811449050/bert_finetuning_test
### Model URL : https://huggingface.co/18811449050/bert_finetuning_test
### Model Description : No model card
1Basco/DialoGPT-small-jake | https://huggingface.co/1Basco/DialoGPT-small-jake | Jake Peralta DialoGPT Model | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1Basco/DialoGPT-small-jake
### Model URL : https://huggingface.co/1Basco/DialoGPT-small-jake
### Model Description : Jake Peralta DialoGPT Model
1n3skh/idk | https://huggingface.co/1n3skh/idk | No model card | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : 1n3skh/idk
### Model URL : https://huggingface.co/1n3skh/idk
### Model Description : No model card