Update model description
README.md CHANGED
@@ -1,11 +1,10 @@
 ---
-license:
 tags:
 - generated_from_trainer
 datasets:
 - squad
 model-index:
-- name: roberta-base-squad
+- name: Graphcore/roberta-base-squad
 results: []
 ---
 
@@ -14,6 +13,15 @@ model-index:
 
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
 
+## Model description
+RoBERTa builds on the BERT pretraining approach: it carefully evaluates a number of design decisions of BERT's pretraining procedure and finds that BERT was significantly undertrained.
+
+It improves performance by training the model longer, with bigger batches over more data, removing the next-sentence-prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data.
+
+As a result, it achieves state-of-the-art results on GLUE, RACE and SQuAD.
+
+Paper link: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf)
+
 ## Training and evaluation data
 
 Trained and evaluated on the [squad dataset](https://huggingface.co/datasets/squad).
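The "dynamically changing the masking pattern" mentioned in the new Model description means the masked positions are re-sampled every time an example is batched, rather than fixed once during preprocessing. A minimal sketch of that idea using the `transformers` MLM data collator; this is an illustration of the technique, not Graphcore's actual training code:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# The MLM collator picks new masked positions each time it builds a batch.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

# Tokenize one sentence; the collator takes a list of per-example feature dicts.
encoded = tokenizer("RoBERTa re-samples its mask pattern on the fly.")
features = [{"input_ids": encoded["input_ids"]}]

# Two passes over the same example generally mask different token positions,
# which is exactly the dynamic-masking behavior described above.
batch_a = collator(features)
batch_b = collator(features)
print(batch_a["input_ids"])
print(batch_b["input_ids"])
```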
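For completeness, a minimal usage sketch of the card being edited, assuming the renamed checkpoint `Graphcore/roberta-base-squad` introduced by this commit is published on the Hub as a standard RoBERTa model with a question-answering head (the card itself does not show usage code):

```python
from transformers import pipeline

# Model id taken from this commit's model-index change; assumed loadable
# through the standard question-answering pipeline.
qa = pipeline("question-answering", model="Graphcore/roberta-base-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the squad dataset.",
)
print(result["answer"], round(result["score"], 3))
```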