---
license: bigscience-bloom-rail-1.0
datasets:
- togethercomputer/RedPajama-Data-V2
language:
- en
metrics:
- bertscore
tags:
- finance
---
|
|
|
|
|
|
# RoBERTa base model |
|
|
|
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it makes a difference between english and English.
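
You can get a feel for the MLM objective with a minimal fill-mask sketch, assuming the Hugging Face `transformers` library and the `roberta-base` checkpoint on the Hub (note that RoBERTa's mask token is `<mask>`, not `[MASK]`):

```python
from transformers import pipeline

# Fill-mask pipeline with the roberta-base checkpoint (assumed identifier).
unmasker = pipeline("fill-mask", model="roberta-base")

# The pipeline returns the highest-scoring candidate tokens for the
# masked position, along with their scores.
print(unmasker("Hello I'm a <mask> model."))
```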
|
|
|
Disclaimer: The team releasing RoBERTa did not write a model card for this model, so this model card has been written by the Hugging Face team.
|
|
|
## Model description |
|
|
|
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts.
|
|
|
More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
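
As a concrete sketch of that objective (assuming `transformers` and PyTorch; the sentence and variable names are illustrative), one word is replaced by the mask token and the model predicts it from the context on both sides:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Mask a single word; the model sees the whole sentence at once,
# so the prediction can use both the left and right context.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the highest-scoring token for it.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```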
|
|
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.
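
A minimal feature-extraction sketch along those lines (assuming `transformers` and PyTorch; the example sentences and the choice of the first-token hidden state as the sentence feature are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

sentences = ["This movie was great.", "The plot was hard to follow."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the hidden state of the first token (<s>) as a fixed-size feature
# vector per sentence; these can be fed to a standard classifier.
features = outputs.last_hidden_state[:, 0, :]
print(features.shape)  # torch.Size([2, 768]) for roberta-base
```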
|
|
|