---
license: mit
language_bcp47:
- ru-RU
tags:
- spellchecking
language:
- ru
size_categories:
- 100K<n<1M
task_categories:
- text2text-generation
---

### Dataset Summary

This dataset is a set of samples for training and testing spell checking, grammar error correction, and ungrammatical text detection models.

The dataset contains two splits:

- test.json contains samples hand-selected to evaluate the quality of models.
- train.json contains synthetic samples generated in various ways.

The dataset was created to test an internal spellchecker for [a generative poetry project](https://github.com/Koziev/verslibre), but it can also be useful in other projects, since it has no explicit specialization for poetry.
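The splits can be read with Python's standard json module. A minimal sketch, assuming each split is a JSON array of sample objects like the one shown below (whether the files are JSON arrays or JSON lines is an assumption; adjust accordingly):

```python
import json

# In practice: samples = json.load(open("test.json", encoding="utf-8"))
# An inline string stands in for the file to keep the example self-contained.
raw = """[
  {"id": 1483, "text": "Разучи стихов по больше",
   "fixed_text": "Разучи стихов побольше",
   "label": 0, "error_type": "Tokenization", "domain": "prose"}
]"""

samples = json.loads(raw)
for s in samples:
    has_defect = s["label"] == 0  # 0 = contains defects, 1 = no defects
```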

### Example

```
{
        "id": 1483,
        "text": "Разучи стихов по больше",
        "fixed_text": "Разучи стихов побольше",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}
```

### Notes

Using "e" instead of "ё" **is not** considered a text defect, so both *"Зеленый клен еще цветет"* and *"Зелёный клён ещё цветёт"*
are considered acceptable.

Incorrect letter case **is not** considered a defect. In particular, the first word of a sentence **does not** have to begin
with a capital letter, so both *"Пушкин был поэтом"* and *"пушкин был поэтом"* are equally acceptable. Likewise,
emphasizing text through capitalization **is not** considered a defect, for example *"Не говори ни ДА, ни НЕТ"*.

The absence of a period, exclamation mark or question mark at the end of a single sentence **is not** considered a defect.

The test split contains only mistakes made by real people; there are no synthetic errors among them.

The errors in the test split come from people who differ in gender, age, education, and social context.

The input and output texts are not necessarily single sentences; a sample may also be 1) a part of a sentence, 2) an incomplete dialog response,
3) several sentences, e.g. a paragraph, or 4) a fragment of a poem, usually one or two quatrains.

The texts may include offensive phrases, phrases that offend religious or political feelings, fragments that contradict moral standards, etc.
Such samples are included only to make the corpus as representative as possible for processing messages
in media such as blogs and comments.

One sample may contain several errors of different types.


### Poetry samples

The test part of the dataset includes texts of poems, which makes it unique among similar
datasets for the Russian language:

```
{
        "id": 24,
        "text": "Чему научит забытьё?\nСмерть формы д'арует литьё.\nРезец мгновенье любит стружка...\nСмерть безобидная подружка!",
        "fixed_text": null,
        "label": 0,
        "error_type": "Grammar",
        "domain": "poetry"
}
```



### Dataset fields

**id** (int64): the sample's id, starting from 1.  
**text** (str): the original text (part of a sentence, a whole sentence, or several sentences).  
**fixed_text** (str): the corrected version of the original text.  
**label** (int): the target class: 1 for "no defects", 0 for "contains defects".  
**error_type** (str): the violation category: Spelling, Grammar, Tokenization, Punctuation, Mixture, or Unknown.  
**domain** (str): the domain: "prose" or "poetry".
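Given these fields, samples can be filtered and tallied in a few lines of Python. `error_type_counts` is a hypothetical helper written for this card, not part of the dataset:

```python
from collections import Counter

# Hypothetical helper (not part of the dataset): tally error types per
# domain for a list of sample dicts with the fields described above.
def error_type_counts(samples, domain):
    return Counter(s["error_type"] for s in samples if s["domain"] == domain)

# Tiny illustrative records, not real dataset entries.
samples = [
    {"id": 1, "domain": "prose", "error_type": "Spelling", "label": 0},
    {"id": 2, "domain": "prose", "error_type": "Grammar", "label": 0},
    {"id": 3, "domain": "poetry", "error_type": "Grammar", "label": 0},
]
counts = error_type_counts(samples, "prose")
```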

### Error types

**Tokenization**: a word is incorrectly split into two tokens, or two words are incorrectly merged into one.

```
{
        "id": 6,
        "text": "Я подбираю по проще слова",
        "fixed_text": "Я подбираю попроще слова",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}
```

**Punctuation**: a missing or extra comma, hyphen, or other punctuation mark.

```
{
        "id": 5,
        "text": "И швырнуть по-дальше",
        "fixed_text": "И швырнуть подальше",
        "label": 0,
        "error_type": "Punctuation",
        "domain": "prose"
}
```

**Spelling**: a word is misspelled.

```
{
        "id": 38,
        "text": "И ведь что интересно, русские официально ни в одном крестовом позоде не участвовали.",
        "fixed_text": "И ведь что интересно, русские официально ни в одном крестовом походе не участвовали.",
        "label": 0,
        "error_type": "Spelling",
        "domain": "prose"
}
```

**Grammar**: one of the words is in the wrong grammatical form, for example a verb in the infinitive instead of a personal form.

```
{
        "id": 61,
        "text": "на него никто не польститься",
        "fixed_text": "на него никто не польстится",
        "label": 0,
        "error_type": "Grammar",
        "domain": "prose"
}
```

Please note that error categories are not always assigned accurately, so the "error_type" field
should not be used to train classifiers.

### Uncensored samples

A number of samples contain explicit obscenities:

```
{
        "id": 1,
        "text": "Но не простого - с лёгкой еб@нцой.",
        "fixed_text": "Но не простого - с лёгкой ебанцой.",
        "label": 0,
        "error_type": "Misspelling",
        "domain": "prose"
}
```

### Statistics for the test split

Number of samples per domain:

```
prose   25012  
poetry  2500  
```


Fix categories for 'poetry' domain:
```
+-----------------------------+-------+-------+
| Category                    | Count | Share |
+-----------------------------+-------+-------+
| punctuation:redundant_comma | 955   | 0.35  |
|                             | 465   | 0.17  |
| tokenization:prefix↦↤word   | 420   | 0.15  |
| punctuation:missing_comma   | 354   | 0.13  |
| punctuation                 | 201   | 0.07  |
| spelling                    | 135   | 0.05  |
| grammar                     | 132   | 0.05  |
| не ↔ ни                     | 31    | 0.01  |
| spelling:ться ↔ тся         | 30    | 0.01  |
| tokenization:не|ни          | 5     | 0.0   |
| letter casing               | 2     | 0.0   |
+-----------------------------+-------+-------+
```


Number of edits required to obtain a corrected version of the text:
```
+-----------------+-------------------+------------------+
| Number of edits | Number of samples | Share of samples |
+-----------------+-------------------+------------------+
| 1               | 646               | 0.5              |
| 2               | 303               | 0.23             |
| 3               | 154               | 0.12             |
| 4               | 79                | 0.06             |
| 5               | 45                | 0.03             |
| 0               | 2                 | 0.0              |
| >5              | 63                | 0.05             |
+-----------------+-------------------+------------------+
```


Fix categories for 'prose' domain:
```
+-----------------------------+-------+-------+
| Category                    | Count | Share |
+-----------------------------+-------+-------+
|                             | 2592  | 0.34  |
| tokenization:prefix↦↤word   | 1691  | 0.22  |
| grammar                     | 1264  | 0.16  |
| spelling                    | 918   | 0.12  |
| punctuation                 | 447   | 0.06  |
| punctuation:missing_comma   | 429   | 0.06  |
| punctuation:redundant_comma | 147   | 0.02  |
| spelling:ться ↔ тся         | 118   | 0.02  |
| не ↔ ни                     | 77    | 0.01  |
| tokenization:не|ни          | 30    | 0.0   |
| letter casing               | 23    | 0.0   |
+-----------------------------+-------+-------+
```

Number of edits required to obtain a corrected version of the text:
```
+-----------------+-------------------+------------------+
| Number of edits | Number of samples | Share of samples |
+-----------------+-------------------+------------------+
| 1               | 5974              | 0.89             |
| 2               | 570               | 0.08             |
| 3               | 126               | 0.02             |
| 4               | 41                | 0.01             |
| 0               | 18                | 0.0              |
| 5               | 9                 | 0.0              |
| >5              | 5                 | 0.0              |
+-----------------+-------------------+------------------+
```

## See also

[RuCOLA](https://huggingface.co/datasets/RussianNLP/rucola)  
[ai-forever/spellcheck_benchmark](https://huggingface.co/datasets/ai-forever/spellcheck_benchmark)