model update
README.md
CHANGED
```diff
@@ -102,7 +102,7 @@ model-index:
       metrics:
       - name: Accuracy
         type: accuracy
-        value: 0.
+        value: 0.5191256830601093
   - task:
       name: Analogy Questions (NELL-ONE Analogy)
       type: multiple-choice-qa
@@ -198,7 +198,7 @@ This model achieves the following results on the relation understanding tasks:
 - Accuracy on U4: 0.6157407407407407
 - Accuracy on Google: 0.934
 - Accuracy on ConceptNet Analogy: 0.3674496644295302
-- Accuracy on T-Rex Analogy: 0.
+- Accuracy on T-Rex Analogy: 0.5191256830601093
 - Accuracy on NELL-ONE Analogy: 0.61
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-semeval2012/raw/main/classification.json)):
 - Micro F1 score on BLESS: 0.9213500075335241
```