---
library_name: transformers
tags:
- nli
- bert
- natural-language-inference
language:
- ru
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- cointegrated/rubert-tiny2
pipeline_tag: text-classification
model-index:
- name: rubert-tiny-nli-terra-v0
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: TERRA
      type: NLI
      split: validation
    metrics:
    - type: accuracy
      value: 0.6677524429967426
      name: Accuracy
    - type: f1
      value: 0.6666666666666666
      name: F1
    - type: precision
      value: 0.6666666666666666
      name: Precision
    - type: recall
      value: 0.6666666666666666
      name: Recall
---

**⚠️ Disclaimer: This model is in the early stages of development and may produce low-quality predictions. For better results, consider using the recommended Russian natural language inference models available [here](https://huggingface.co/cointegrated).**

# RuBERT-tiny-nli v0

This model is an initial attempt to fine-tune [RuBERT-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) for two-way natural language inference (entailment vs. not_entailment), using the Russian [Textual Entailment Recognition (TERRa)](https://russiansuperglue.com/tasks/task_info/TERRa) dataset. Its performance is currently limited; see the metrics below.
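
For context, a comparable fine-tuning run could look roughly like the sketch below. This is not the script used for this checkpoint: the dataset identifier (`RussianNLP/russian_super_glue`, config `terra`), the label order, and all hyperparameters are assumptions.

```python
# A rough sketch of how such a fine-tune could be reproduced.
# The dataset id, column names, label order and hyperparameters below are
# assumptions, not the exact recipe used for this checkpoint.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_id = 'cointegrated/rubert-tiny2'
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=2,
    id2label={0: 'entailment', 1: 'not_entailment'},   # assumed label order
    label2id={'entailment': 0, 'not_entailment': 1},
)

# TERRa from the Russian SuperGLUE benchmark (dataset id assumed)
terra = load_dataset('RussianNLP/russian_super_glue', 'terra')

def encode(batch):
    # Premise and hypothesis are encoded together as a text pair
    return tokenizer(batch['premise'], batch['hypothesis'],
                     truncation=True, max_length=128)

terra = terra.map(encode, batched=True)

args = TrainingArguments(
    output_dir='rubert-tiny-nli-terra-v0',
    learning_rate=3e-5,                 # assumed
    num_train_epochs=3,                 # assumed
    per_device_train_batch_size=32,     # assumed
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=terra['train'],
    eval_dataset=terra['validation'],
    tokenizer=tokenizer,                # enables dynamic padding
)
trainer.train()
```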


## Usage
How to run the model for NLI:

```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = 'Marwolaeth/rubert-tiny-nli-terra-v0'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
if torch.cuda.is_available():
    model.cuda()

# An example from the base model card
premise1 = 'Сократ - человек, а все люди смертны.'
hypothesis1 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    prediction = model(
      # The premise and hypothesis are encoded together as a text pair
      **tokenizer(premise1, hypothesis1, return_tensors='pt').to(model.device)
    )
    p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
# Map class probabilities to the label names stored in the model config
print({v: p[k] for k, v in model.config.id2label.items()})
# {'not_entailment': 0.7698182, 'entailment': 0.23018183}

# An example concerning sentiments
premise2 = 'Я ненавижу желтые занавески'
hypothesis2 = 'Мне нравятся желтые занавески'
with torch.inference_mode():
    prediction = model(
      **tokenizer(premise2, hypothesis2, return_tensors='pt').to(model.device)
    )
    p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
print({v: p[k] for k, v in model.config.id2label.items()})
# {'not_entailment': 0.60584205, 'entailment': 0.3941579}
```
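
The checkpoint can also be called through the standard `text-classification` pipeline, passing the premise and hypothesis as a text pair. A minimal sketch (exact scores omitted):

```python
from transformers import pipeline

nli = pipeline('text-classification', model='Marwolaeth/rubert-tiny-nli-terra-v0')

# Premise/hypothesis pairs are passed as a dict with 'text' and 'text_pair' keys;
# top_k=None returns the score for every label rather than only the top one.
result = nli(
    {'text': 'Сократ - человек, а все люди смертны.',
     'text_pair': 'Сократ никогда не умрёт.'},
    top_k=None,
)
print(result)
# [{'label': 'not_entailment', 'score': ...}, {'label': 'entailment', 'score': ...}]
```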

## Model Performance Metrics

The following metrics summarize the model's performance on the TERRa validation split:

| Metric                           | Value                     |
|----------------------------------|---------------------------|
| **Validation Loss**              | 0.6261                    |
| **Validation Accuracy**          | 66.78%                    |
| **Validation F1 Score**          | 66.67%                    |
| **Validation Precision**         | 66.67%                    |
| **Validation Recall**            | 66.67%                    |
| **Validation Runtime***          | 0.7043 seconds            |
| **Samples per Second***          | 435.88                    |
| **Steps per Second***            | 14.20                     |

*Measured on a T4 GPU in Google Colab
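
These figures could in principle be re-checked with a short evaluation loop such as the one below. The dataset identifier and the F1 averaging are assumptions, and this is not the exact script that produced the table above.

```python
# A sketch of an evaluation loop over the TERRa validation split.
# The dataset id and the F1 averaging are assumptions.
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = 'Marwolaeth/rubert-tiny-nli-terra-v0'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

val = load_dataset('RussianNLP/russian_super_glue', 'terra', split='validation')

preds, refs = [], []
with torch.inference_mode():
    for example in val:
        enc = tokenizer(example['premise'], example['hypothesis'],
                        truncation=True, return_tensors='pt')
        logits = model(**enc).logits
        preds.append(logits.argmax(-1).item())
        refs.append(example['label'])

print('accuracy:', accuracy_score(refs, preds))
print('macro F1:', f1_score(refs, preds, average='macro'))
```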