model update
README.md
CHANGED
@@ -102,7 +102,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.7431693989071039
   - task:
       name: Analogy Questions (NELL-ONE Analogy)
       type: multiple-choice-qa
@@ -198,7 +198,7 @@ This model achieves the following results on the relation understanding tasks:
 - Accuracy on U4: 0.4699074074074074
 - Accuracy on Google: 0.692
 - Accuracy on ConceptNet Analogy: 0.1535234899328859
-- Accuracy on T-Rex Analogy: 0.
+- Accuracy on T-Rex Analogy: 0.7431693989071039
 - Accuracy on NELL-ONE Analogy: 0.63
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-t-rex/raw/main/classification.json)):
 - Micro F1 score on BLESS: 0.8882025011300286