jinunyachhyon committed · Commit c8511fc · 1 Parent(s): df4edc7
Add evaluation results for IRIISNEPAL/RoBERTa_Nepali_110M on Nep-gLUE tasks
README.md CHANGED
@@ -12,6 +12,60 @@ language:
 metrics:
 - f1
 - accuracy
+model-index:
+- name: IRIISNEPAL/RoBERTa_Nepali_110M
+  results:
+  - task:
+      type: token-classification
+      name: Named Entity Recognition (NER)
+    dataset:
+      type: nep_glue
+      name: Nep-gLUE
+    metrics:
+    - type: f1
+      name: Macro F1 Score
+      value: 93.74
+
+  - task:
+      type: token-classification
+      name: Part-of-Speech (POS) Tagging
+    dataset:
+      type: nep_glue
+      name: Nep-gLUE
+    metrics:
+    - type: f1
+      name: Macro F1 Score
+      value: 97.52
+
+  - task:
+      type: text-classification
+      name: Categorical Classification (CC)
+    dataset:
+      type: nep_glue
+      name: Nep-gLUE
+    metrics:
+    - type: f1
+      name: Macro F1 Score
+      value: 94.68
+
+  - task:
+      type: similarity
+      name: Categorical Pair Similarity (CPS)
+    dataset:
+      type: nep_glue
+      name: Nep-gLUE
+    metrics:
+    - type: f1
+      name: Macro F1 Score
+      value: 96.49
+
+  - task:
+      type: overall-benchmark
+      name: Nep-gLUE
+    metrics:
+    - type: score
+      name: Overall Score
+      value: 95.60
 ---
 
 # Model Card for Model ID
@@ -144,8 +198,7 @@ The model was evaluated on the [Nepali Language Evaluation Benchmark (Nep-gLUE)]
 
 <!-- These are the evaluation metrics being used, ideally with a description of why. -->
 
--
-- F1 Score
+- **Macro-F1** for the [Nepali Language Understanding Evaluation (Nep-gLUE)](https://nepberta.github.io/nepglue/) benchmark.
 
 ### Results
 
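Once this commit lands, the model-index block above becomes machine-readable Hub metadata rather than display text. A minimal sketch of reading the scores back with huggingface_hub's model-card API (only the repo id comes from this commit; the surrounding code is illustrative):

```python
from huggingface_hub import ModelCard

# Download README.md from the Hub and parse its YAML front matter.
card = ModelCard.load("IRIISNEPAL/RoBERTa_Nepali_110M")

# The model-index block is surfaced as EvalResult objects, one per
# (task, dataset, metric) entry in the YAML added by this commit.
# eval_results is None when no model-index is present, hence the guard.
for result in card.data.eval_results or []:
    print(f"{result.task_name}: {result.metric_name} = {result.metric_value}")
```

If a score ever needs correcting, `huggingface_hub.metadata_update` can patch the same model-index block programmatically instead of hand-editing the YAML.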