internetoftim committed
Commit 270a133
1 Parent(s): 339d1eb

Update README.md

Files changed (1): README.md (+18, -6)
README.md CHANGED
@@ -10,22 +10,34 @@ model-index:
  ---

- # roberta-base-squad

- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.

  ## Model description
- RoBERTa is based on BERT pretrain approach but it t evaluates carefully a number of design decisions of BERT pretraining approach so that it found it is undertrained.

- It suggested a way to improve the performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objectives, training on longer sequences and dynamically changing mask pattern applied to the training data.

- As a result, it achieves state-of-the-art results on GLUE, RACE and SQuAD and so on on.

  Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)

  ## Training and evaluation data

- Trained and evaluated on the [squad dataset](https://huggingface.co/datasets/squad).

  ## Training procedure
 
 
  ---

+ # Graphcore/roberta-base-squad
+
+ BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and Masked LM.
+
+ It was pretrained with two objectives: Masked Language Modelling (MLM) and Next Sentence Prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations.
+
+ It reduces the need for many hand-engineered, task-specific architectures by providing pretrained representations, and it achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
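To make the MLM objective concrete, here is a minimal sketch using the `transformers` fill-mask pipeline with the plain `roberta-base` checkpoint (the prompt sentence is only illustrative; RoBERTa uses `<mask>` as its mask token):

```python
from transformers import pipeline

# The fill-mask pipeline runs the pretrained MLM head: the model predicts
# the most likely tokens for the masked position using context on both sides.
unmasker = pipeline("fill-mask", model="roberta-base")

for prediction in unmasker("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```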
 
 
  ## Model description

+ RoBERTa is based on the BERT pretraining approach and improves on it by carefully evaluating a number of BERT's pretraining design decisions, finding that BERT was undertrained.
+
+ It improves performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objective, training on longer sequences and dynamically changing the mask pattern applied to the training data.
+
+ As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD.

  Paper link : [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
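The dynamic masking mentioned above can be illustrated with the `transformers` masked-language-modelling data collator, which re-samples the mask pattern every time a batch is built (a minimal sketch; the sentence and the 15% masking probability are illustrative defaults, not values taken from this model card):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

encoding = tokenizer("RoBERTa re-samples the masked positions at batch time.")

# The same sentence gets a different mask pattern on every pass,
# unlike static masking, which fixes the masked positions once.
for _ in range(3):
    batch = collator([encoding])
    print(tokenizer.decode(batch["input_ids"][0]))
```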
 
+ ## Intended uses & limitations
+
+ This model is a fine-tuned version of [HuggingFace/roberta-base](https://huggingface.co/roberta-base) on the SQuAD dataset.
+
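As a sketch of how this fine-tuned checkpoint might be used for extractive question answering, assuming the `Graphcore/roberta-base-squad` model id from the heading above can be loaded with the standard `transformers` pipeline API:

```python
from transformers import pipeline

# Extractive QA: the model predicts the start/end of an answer span
# inside the supplied context rather than generating free-form text.
qa = pipeline("question-answering", model="Graphcore/roberta-base-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```
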
  ## Training and evaluation data

+ Trained and evaluated on the SQuAD dataset:
+ - [HuggingFace/squad](https://huggingface.co/datasets/squad)
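For reference, a minimal sketch of loading the dataset with the `datasets` library (assuming the standard SQuAD v1.1 `train`/`validation` splits):

```python
from datasets import load_dataset

# SQuAD v1.1 ships with "train" and "validation" splits of
# question/context/answer examples.
squad = load_dataset("squad")

example = squad["train"][0]
print(example["question"])
print(example["answers"]["text"][0])
```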
 
  ## Training procedure