Update README
README.md
CHANGED
@@ -19,7 +19,7 @@ The other (non-default) version which can be used is:
 Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
 the Hugging Face team and contributors.
 
-
+## Results on SQA - Dev Accuracy
 
 Size | Reset | Dev Accuracy | Link
 -------- | --------| -------- | ----
@@ -38,8 +38,6 @@ TINY | reset | 0.2375 | [tapas-tiny-finetuned-sqa](https://huggingface.co/google
 
 ## Model description
 
-## Model description
-
 TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
 This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
 can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
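The checkpoint linked in the results table above can be exercised with the `transformers` table-question-answering pipeline. A minimal sketch, assuming `transformers` and `pandas` are installed and the (small) `google/tapas-tiny-finetuned-sqa` weights can be downloaded; the toy table and query are illustrative, not part of the model card:

```python
import pandas as pd
from transformers import pipeline

# TAPAS expects every table cell to be a string.
# The single row here is taken from the results table in the card.
table = pd.DataFrame(
    {"Size": ["TINY"], "Reset": ["reset"], "Dev Accuracy": ["0.2375"]}
)

# Downloads the checkpoint on first use.
qa = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-sqa")

result = qa(table=table, query="What is the dev accuracy of the tiny model?")
print(result["answer"])
```

The pipeline returns a dict whose `"answer"` field holds the selected cell(s), along with the cell coordinates and the predicted aggregation operator.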