---
dataset_info:
- config_name: ru_pauq_tl
  features:
  - name: id
    dtype: string
  - name: db_id
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: query
    dtype: string
  - name: sql
    sequence: string
  - name: question_toks
    sequence: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_values
    sequence: string
  - name: masked_query
    dtype: string
  splits:
  - name: train
    num_bytes: 8188471
    num_examples: 6558
  - name: test
    num_bytes: 2284950
    num_examples: 1979
  download_size: 315047611
  dataset_size: 10473421
- config_name: en_pauq_tl
  features:
  - name: id
    dtype: string
  - name: db_id
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: query
    dtype: string
  - name: sql
    sequence: string
  - name: question_toks
    sequence: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_values
    sequence: string
  - name: masked_query
    dtype: string
  splits:
  - name: train
    num_bytes: 7433812
    num_examples: 6559
  - name: test
    num_bytes: 2017972
    num_examples: 1975
  download_size: 315047611
  dataset_size: 9451784
- config_name: ru_pauq_iid
  features:
  - name: id
    dtype: string
  - name: db_id
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: query
    dtype: string
  - name: sql
    sequence: string
  - name: question_toks
    sequence: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_values
    sequence: string
  - name: masked_query
    dtype: string
  splits:
  - name: train
    num_bytes: 9423175
    num_examples: 8800
  - name: test
    num_bytes: 1069135
    num_examples: 1074
  download_size: 315047611
  dataset_size: 10492310
- config_name: en_pauq_iid
  features:
  - name: id
    dtype: string
  - name: db_id
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: query
    dtype: string
  - name: sql
    sequence: string
  - name: question_toks
    sequence: string
  - name: query_toks
    sequence: string
  - name: query_toks_no_values
    sequence: string
  - name: masked_query
    dtype: string
  splits:
  - name: train
    num_bytes: 8505951
    num_examples: 8800
  - name: test
    num_bytes: 964008
    num_examples: 1076
  download_size: 315047611
  dataset_size: 9469959
license: cc-by-4.0
task_categories:
- translation
- text2text-generation
language:
- ru
- en
tags:
- text-to-sql
size_categories:
- 10K<n<100K
---
# Dataset Card for PAUQ

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:** [PAUQ: Text-to-SQL in Russian](https://aclanthology.org/2022.findings-emnlp.175.pdf)
- **Leaderboard:**
- **Point of Contact:**

Link to databases: https://drive.google.com/file/d/1Xjbp207zfCaBxhPgt-STB_RxwNo2TIW2/view
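
The gold SQL queries are meant to be executed against these databases. Below is a minimal sketch using Python's built-in `sqlite3`, assuming the archive unpacks Spider-style into one folder per database; the path and query are illustrative, not taken from the dataset:

```python
import sqlite3

# Illustrative path: adjust to wherever the downloaded archive was unpacked.
db_path = "pauq_databases/concert_singer/concert_singer.sqlite"

conn = sqlite3.connect(db_path)
try:
    # In practice the query comes from an example's `query` field.
    rows = conn.execute("SELECT count(*) FROM singer").fetchall()
    print(rows)
finally:
    conn.close()
```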

### Dataset Summary

PAUQ is the Russian version of [Spider](https://yale-lily.github.io/spider), the Yale Semantic Parsing and Text-to-SQL Dataset; a minimal loading sketch appears after the change list below.
Major changes relative to the original:

- New Russian-language values were added to (not substituted for) the existing values in the database tables; table and database names remain as in the original.
- The natural-language questions were localized into Russian, with all database values replaced by the new ones.
- The filter values in the SQL queries were changed accordingly.
- Previously empty tables were filled with values.
- The dataset was complemented with new samples of underrepresented query types.
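
The four configurations described in the metadata above (`ru_pauq_tl`, `en_pauq_tl`, `ru_pauq_iid`, `en_pauq_iid`) can be loaded with the `datasets` library. A minimal sketch; the repository ID below is a placeholder for this dataset's actual Hub path, and the field descriptions in the comments are inferred from the feature names:

```python
from datasets import load_dataset

# NOTE: "composite/pauq" is a placeholder repository ID;
# replace it with this dataset's actual Hub path.
ds = load_dataset("composite/pauq", "ru_pauq_tl")

example = ds["train"][0]
print(example["question"])      # natural-language question in Russian
print(example["query"])         # gold SQL query as a single string
print(example["db_id"])         # database the query is executed against
print(example["masked_query"])  # query variant with literal values masked
```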

### Languages

Russian (the `ru_pauq_*` configurations); the `en_pauq_*` configurations keep the questions in English.

## Dataset Creation

### Curation Rationale

The translation from English to Russian was carried out by a professional human translator with SQL competence. Verification of the translated questions and their conformity with the queries, as well as the updating of the databases, was performed by four computer science students.
Details are given in [Section 3 of the paper](https://aclanthology.org/2022.findings-emnlp.175.pdf).
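
A simple programmatic complement to that manual verification is to check that every gold query still executes against its database. A sketch reusing the placeholder paths and repository ID from the examples above:

```python
import os
import sqlite3

from datasets import load_dataset

DB_ROOT = "pauq_databases"  # placeholder: root of the unpacked database archive

# Placeholder repository ID, as in the loading example above.
ds = load_dataset("composite/pauq", "ru_pauq_tl", split="test")

failures = 0
for ex in ds:
    db_path = os.path.join(DB_ROOT, ex["db_id"], f"{ex['db_id']}.sqlite")
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(ex["query"]).fetchall()
    except sqlite3.Error:
        failures += 1
    finally:
        conn.close()

print(f"{failures} of {len(ds)} gold queries failed to execute")
```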

## Additional Information

### Licensing Information

The presented dataset has been collected in a manner consistent with the terms of use of the original Spider dataset, which is distributed under the CC BY-SA 4.0 license.

### Citation Information

[Paper link](https://aclanthology.org/2022.findings-emnlp.175.pdf)

```
@inproceedings{bakshandaeva-etal-2022-pauq,
    title = "{PAUQ}: Text-to-{SQL} in {R}ussian",
    author = "Bakshandaeva, Daria  and
      Somov, Oleg  and
      Dmitrieva, Ekaterina  and
      Davydova, Vera  and
      Tutubalina, Elena",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.175",
    pages = "2355--2376",
    abstract = "Semantic parsing is an important task that allows to democratize human-computer interaction. One of the most popular text-to-SQL datasets with complex and diverse natural language (NL) questions and SQL queries is Spider. We construct and complement a Spider dataset for Russian, thus creating the first publicly available text-to-SQL dataset for this language. While examining its components - NL questions, SQL queries and databases content - we identify limitations of the existing database structure, fill out missing values for tables and add new requests for underrepresented categories. We select thirty functional test sets with different features that can be used for the evaluation of neural models{'} abilities. To conduct the experiments, we adapt baseline architectures RAT-SQL and BRIDGE and provide in-depth query component analysis. On the target language, both models demonstrate strong results with monolingual training and improved accuracy in multilingual scenario. In this paper, we also study trade-offs between machine-translated and manually-created NL queries. At present, Russian text-to-SQL is lacking in datasets as well as trained models, and we view this work as an important step towards filling this gap.",
}
```

### Contributions

Thanks to [@gugutse](https://github.com/Gugutse), [@runnerup96](https://github.com/runnerup96), [@dmi3eva](https://github.com/dmi3eva), [@veradavydova](https://github.com/VeraDavydova), [@tutubalinaev](https://github.com/tutubalinaev) for adding this dataset.