saattrupdan committed · Commit 458e8ed · 1 Parent(s): adcff3d

Update README.md
README.md CHANGED
@@ -1,19 +1,21 @@
 ---
 license: mit
+language: en
 tags:
 - generated_from_trainer
 model-index:
 - name: verdict-classifier-en
-  results:
+  results:
+  - task:
+      type: text-classification
+      name: Verdict Classification
+widget:
+- "One might think that this is true, but it's taken out of context."
 ---
 
-
-
-
-# verdict-classifier-en
-
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
-It achieves the following results on the evaluation set:
+# English Verdict Classifier
+This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on 2,500 deduplicated verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into English with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
+It achieves the following results on the evaluation set, being 1,000 such verdicts translated into English, but here including duplicates to represent the true distribution:
 - Loss: 0.1304
 - F1 Macro: 0.8868
 - F1 Misinformation: 0.9832
@@ -24,17 +26,6 @@ It achieves the following results on the evaluation set:
 - Prec Factual: 0.9783
 - Prec Other: 0.6038
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
@@ -90,4 +81,4 @@ The following hyperparameters were used during training:
 - Transformers 4.11.3
 - Pytorch 1.9.0+cu102
 - Datasets 1.9.0
-- Tokenizers 0.10.2
+- Tokenizers 0.10.2
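
The updated card describes a RoBERTa-based text classifier for fact-check verdicts. As a quick orientation for readers of this commit, here is a minimal inference sketch; it assumes the model is published on the Hub as `saattrupdan/verdict-classifier-en` (committer name plus model name, which the diff itself does not state), and the output labels are whatever the model's own config defines.

```python
# Minimal inference sketch. The repo id below is an assumption
# (committer name + model name); the output labels come from the
# model's config and are not listed in this diff.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="saattrupdan/verdict-classifier-en",
)

# The widget example added to the README metadata.
verdict = "One might think that this is true, but it's taken out of context."
print(classifier(verdict))
```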
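The new description says the training verdicts came from the Google Fact Check Tools API and were translated into English with the Google Cloud Translation API. The sketch below shows one way such verdicts could be collected and translated; it is not necessarily the author's pipeline, and the query term and API key are placeholders.

```python
# Hedged sketch of pulling verdicts from the Fact Check Tools API and
# translating them with the Cloud Translation API (v2 REST endpoint).
# The query term and API key are placeholders, not the author's setup.
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder

def fetch_verdicts(query: str, page_size: int = 50) -> list[str]:
    """Return the textual ratings (verdicts) attached to matching claims."""
    resp = requests.get(
        "https://factchecktools.googleapis.com/v1alpha1/claims:search",
        params={"query": query, "pageSize": page_size, "key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    claims = resp.json().get("claims", [])
    return [
        review["textualRating"]
        for claim in claims
        for review in claim.get("claimReview", [])
        if "textualRating" in review
    ]

def translate_to_english(texts: list[str]) -> list[str]:
    """Translate a batch of verdicts into English."""
    resp = requests.post(
        "https://translation.googleapis.com/language/translate/v2",
        params={"key": API_KEY},
        json={"q": texts, "target": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    return [t["translatedText"] for t in resp.json()["data"]["translations"]]

verdicts = fetch_verdicts("climate")
english_verdicts = translate_to_english(verdicts)
```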
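The result list reports a macro F1 alongside per-class F1 and precision values. For readers unfamiliar with these, here is an illustrative computation with scikit-learn on toy data; the three class names (misinformation, factual, other) are inferred from the metric names and are an assumption about the label set.

```python
# Illustrative computation of the reported metric types on toy data.
# The class names are inferred from the README's metric names
# (F1 Misinformation, Prec Factual, Prec Other) and are assumptions.
from sklearn.metrics import f1_score, precision_score

labels = ["misinformation", "factual", "other"]
y_true = ["misinformation", "factual", "other", "misinformation", "factual"]
y_pred = ["misinformation", "factual", "misinformation", "misinformation", "factual"]

f1_macro = f1_score(y_true, y_pred, labels=labels, average="macro")
f1_per_class = f1_score(y_true, y_pred, labels=labels, average=None)
prec_per_class = precision_score(
    y_true, y_pred, labels=labels, average=None, zero_division=0
)

print(f"F1 Macro: {f1_macro:.4f}")
for name, f1, prec in zip(labels, f1_per_class, prec_per_class):
    print(f"{name}: F1={f1:.4f}, Prec={prec:.4f}")
```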