model update
README.md
CHANGED
```diff
@@ -103,6 +103,17 @@ model-index:
       - name: Accuracy
         type: accuracy
         value: 0.45901639344262296
+  - task:
+      name: Analogy Questions (NELL-ONE Analogy)
+      type: multiple-choice-qa
+    dataset:
+      name: NELL-ONE Analogy
+      args: relbert/analogy_questions
+      type: analogy-questions
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.6233333333333333
   - task:
       name: Lexical Relation Classification (BLESS)
       type: classification
@@ -188,6 +199,7 @@ This model achieves the following results on the relation understanding tasks:
 - Accuracy on Google: 0.92
 - Accuracy on ConceptNet Analogy: 0.3196308724832215
 - Accuracy on T-Rex Analogy: 0.45901639344262296
+- Accuracy on NELL-ONE Analogy: 0.6233333333333333
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-triplet-c-semeval2012/raw/main/classification.json)):
   - Micro F1 score on BLESS: 0.9025161970769926
   - Micro F1 score on CogALexV: 0.8798122065727699
```
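The metric values this commit adds live in the card's YAML front matter (the block between the `---` markers), so they can be read back programmatically. A minimal stdlib-only sketch; the `CARD` string and the `accuracy_values` helper are hypothetical illustrations, not part of the relbert tooling:

```python
import re

# Hypothetical excerpt of the card's front matter after this commit.
CARD = """\
---
model-index:
- name: relbert/relbert-roberta-large-triplet-c-semeval2012
  results:
  - task:
      name: Analogy Questions (NELL-ONE Analogy)
      type: multiple-choice-qa
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6233333333333333
---
Model card body text.
"""

def accuracy_values(card_text):
    """Extract every metric `value:` from the YAML front matter."""
    # The front matter sits between the first two '---' markers.
    _, front_matter, _ = card_text.split("---", 2)
    return [float(v) for v in re.findall(r"value:\s*([\d.]+)", front_matter)]

print(accuracy_values(CARD))  # [0.6233333333333333]
```

A real pipeline would parse the front matter with a YAML library rather than a regex; the regex keeps the sketch dependency-free.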