Marwolaeth committed
Commit 87964d9 (1 parent: ff3e59f)
Update README.md
README.md
CHANGED
@@ -7,21 +7,36 @@ tags:
 language:
 - ru
 metrics:
-- name: accuracy
-  type: accuracy
-  value: 66.78
-- name: f1
-  type: f1
-  value: 66.67
-- name: precision
-  type: precision
-  value: 66.67
-- name: recall
-  type: recall
-  value: 66.67
+- accuracy
+- f1
+- precision
+- recall
 base_model:
 - cointegrated/rubert-tiny2
 pipeline_tag: text-classification
+model-index:
+- name: rubert-tiny-nli-terra-v0
+  results:
+  - task:
+      type: text-classification
+      name: Text Classification
+    dataset:
+      name: TERRA
+      type: NLI
+      split: validation
+    metrics:
+    - type: accuracy
+      value: 0.6677524429967426
+      name: Accuracy
+    - type: macro f1
+      value: 0.6666666666666666
+      name: Macro F1
+    - type: macro precision
+      value: 0.6666666666666666
+      name: Macro Precision
+    - type: macro recall
+      value: 0.6666666666666666
+      name: Macro Recall
 ---
 
 **⚠️ Disclaimer: This model is in the early stages of development and may produce low-quality predictions. For better results, consider using the recommended Russian natural language inference models available [here](https://huggingface.co/cointegrated).**
@@ -72,15 +87,15 @@ print({v: p[k] for k, v in model.config.id2label.items()})
 
 The following metrics summarize the performance of the model on the test dataset:
 
-| Metric                           | Value                     |
-|----------------------------------|---------------------------|
-| **Validation Loss**              | …                         |
-| **Validation Accuracy**          | …                         |
-| **Validation F1 Score**          | …                         |
-| **Validation Precision**         | …                         |
-| **Validation Recall**            | …                         |
-| **Validation Runtime***          | …                         |
-| **Samples per Second***          | …                         |
-| **Steps per Second***            | …                         |
+| Metric                           | Value                     |
+|----------------------------------|---------------------------|
+| **Validation Loss**              | 0.6261                    |
+| **Validation Accuracy**          | 66.78%                    |
+| **Validation F1 Score**          | 66.67%                    |
+| **Validation Precision**         | 66.67%                    |
+| **Validation Recall**            | 66.67%                    |
+| **Validation Runtime***          | 0.7043 seconds            |
+| **Samples per Second***          | 435.88                    |
+| **Steps per Second***            | 14.20                     |
 
 *Using T4 GPU with Google Colab
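The exact accuracy value recorded in the metadata can be sanity-checked against the dataset size. A minimal sketch, assuming the TERRA validation split contains 307 examples (its commonly reported size in RussianSuperGLUE — an assumption, not stated in this commit):

```python
# Hedged sanity check: the recorded accuracy 0.6677524429967426 is
# consistent with 205 correct predictions out of an assumed 307
# TERRA validation examples.
reported_accuracy = 0.6677524429967426  # value from the model-index metadata

n_examples = 307  # assumed TERRA validation split size
n_correct = round(reported_accuracy * n_examples)

print(n_correct)  # 205
# The implied fraction reproduces the recorded value to float precision.
print(abs(n_correct / n_examples - reported_accuracy) < 1e-12)  # True
```

This also explains why the card rounds the same number to 66.78% in the results table.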
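The throughput rows of the table are mutually consistent and let one back out the evaluation setup. A small arithmetic sketch (the batch size of 32 is an inference, not stated in the card):

```python
import math

# Figures taken directly from the metrics table above.
runtime_s = 0.7043        # Validation Runtime (seconds)
samples_per_s = 435.88    # Samples per Second
steps_per_s = 14.20       # Steps per Second

n_samples = round(runtime_s * samples_per_s)  # implied number of examples
n_steps = round(runtime_s * steps_per_s)      # implied number of eval steps

print(n_samples, n_steps)  # 307 10
# 307 samples processed in 10 steps suggests a per-device eval batch
# size of 32, since ceil(307 / 32) == 10 -- an assumption, not stated.
print(math.ceil(n_samples / 32))  # 10
```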