Oleg Somov committed
Commit 5e70b4c
1 Parent(s): 85ef8aa

fix readme

Files changed (1)
  1. README.md +152 -74

README.md CHANGED

@@ -1,6 +1,6 @@
  ---
  dataset_info:
- - config_name: ru_pauq_tl
+ - config_name: ru_os
  features:
  - name: id
  dtype: string
@@ -22,18 +22,14 @@ dataset_info:
  sequence: string
  - name: query_toks_no_values
  sequence: string
- - name: masked_query
+ - name: template
  dtype: string
  splits:
  - name: train
- num_bytes: 8188471
- num_examples: 6558
+ num_examples: 8800
  - name: test
- num_bytes: 2284950
- num_examples: 1979
- download_size: 315047611
- dataset_size: 10473421
- - config_name: en_pauq_tl
+ num_examples: 1076
+ - config_name: en_os
  features:
  - name: id
  dtype: string
@@ -55,18 +51,14 @@ dataset_info:
  sequence: string
  - name: query_toks_no_values
  sequence: string
- - name: masked_query
+ - name: template
  dtype: string
  splits:
  - name: train
- num_bytes: 7433812
- num_examples: 6559
+ num_examples: 8800
  - name: test
- num_bytes: 2017972
- num_examples: 1975
- download_size: 315047611
- dataset_size: 9451784
- - config_name: ru_pauq_iid
+ num_examples: 1076
+ - config_name: ru_trl
  features:
  - name: id
  dtype: string
@@ -88,18 +80,14 @@ dataset_info:
  sequence: string
  - name: query_toks_no_values
  sequence: string
- - name: masked_query
+ - name: template
  dtype: string
  splits:
  - name: train
- num_bytes: 9423175
- num_examples: 8800
+ num_examples: 7890
  - name: test
- num_bytes: 1069135
- num_examples: 1074
- download_size: 315047611
- dataset_size: 10492310
- - config_name: en_pauq_iid
+ num_examples: 1971
+ - config_name: en_trl
  features:
  - name: id
  dtype: string
@@ -121,27 +109,71 @@ dataset_info:
  sequence: string
  - name: query_toks_no_values
  sequence: string
- - name: masked_query
+ - name: template
  dtype: string
  splits:
  - name: train
- num_bytes: 8505951
- num_examples: 8800
+ num_examples: 7890
  - name: test
- num_bytes: 964008
- num_examples: 1076
- download_size: 315047611
- dataset_size: 9469959
- license: cc-by-4.0
- task_categories:
- - translation
- - text2text-generation
- language:
- - ru
- tags:
- - text-to-sql
- size_categories:
- - 10K<n<100K
+ num_examples: 1974
+ - config_name: ru_tsl
+ features:
+ - name: id
+ dtype: string
+ - name: db_id
+ dtype: string
+ - name: source
+ dtype: string
+ - name: type
+ dtype: string
+ - name: question
+ dtype: string
+ - name: query
+ dtype: string
+ - name: sql
+ sequence: string
+ - name: question_toks
+ sequence: string
+ - name: query_toks
+ sequence: string
+ - name: query_toks_no_values
+ sequence: string
+ - name: template
+ dtype: string
+ splits:
+ - name: train
+ num_examples: 7900
+ - name: test
+ num_examples: 1969
+ - config_name: en_tsl
+ features:
+ - name: id
+ dtype: string
+ - name: db_id
+ dtype: string
+ - name: source
+ dtype: string
+ - name: type
+ dtype: string
+ - name: question
+ dtype: string
+ - name: query
+ dtype: string
+ - name: sql
+ sequence: string
+ - name: question_toks
+ sequence: string
+ - name: query_toks
+ sequence: string
+ - name: query_toks_no_values
+ sequence: string
+ - name: template
+ dtype: string
+ splits:
+ - name: train
+ num_examples: 7900
+ - name: test
+ num_examples: 1974
  ---
  # Dataset Card for [Dataset Name]

@@ -149,10 +181,23 @@ size_categories:
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
@@ -169,55 +214,88 @@ Link to databases: https://drive.google.com/file/d/1Xjbp207zfCaBxhPgt-STB_RxwNo2

  ### Dataset Summary

- The Russian version of the [Spider](https://yale-lily.github.io/spider) - Yale Semantic Parsing and Text-to-SQL Dataset.
- Major changings:
+ [More Information Needed]

- - Adding (not replacing) new Russian language values in DB tables. Table and DB names remain the original.
- - Localization of natural language questions into Russian. All DB values replaced by new.
- - Changing in SQL-queries filters.
- - Filling empty table with values.
- - Complementing the dataset with the new samples of underrepresented types.
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]

  ### Languages

- Russian
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- The translation from English to Russian is undertaken by a professional human translator with SQL-competence. A verification of the translated questions and their conformity with the queries, and an updating of the databases are undertaken by 4 computer science students.
- Details are in the [section 3](https://aclanthology.org/2022.findings-emnlp.175.pdf).
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]

  ## Additional Information

+ ### Dataset Curators
+
+ [More Information Needed]
+
  ### Licensing Information

- The presented dataset have been collected in a manner which is consistent with the terms of use of the original Spider, which is distributed under the CC BY-SA 4.0 license.
+ [More Information Needed]

  ### Citation Information

- [Paper link](https://aclanthology.org/2022.findings-emnlp.175.pdf)
-
- ```
- @inproceedings{bakshandaeva-etal-2022-pauq,
- title = "{PAUQ}: Text-to-{SQL} in {R}ussian",
- author = "Bakshandaeva, Daria and
- Somov, Oleg and
- Dmitrieva, Ekaterina and
- Davydova, Vera and
- Tutubalina, Elena",
- booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
- month = dec,
- year = "2022",
- address = "Abu Dhabi, United Arab Emirates",
- publisher = "Association for Computational Linguistics",
- url = "https://aclanthology.org/2022.findings-emnlp.175",
- pages = "2355--2376",
- abstract = "Semantic parsing is an important task that allows to democratize human-computer interaction. One of the most popular text-to-SQL datasets with complex and diverse natural language (NL) questions and SQL queries is Spider. We construct and complement a Spider dataset for Russian, thus creating the first publicly available text-to-SQL dataset for this language. While examining its components - NL questions, SQL queries and databases content - we identify limitations of the existing database structure, fill out missing values for tables and add new requests for underrepresented categories. We select thirty functional test sets with different features that can be used for the evaluation of neural models{'} abilities. To conduct the experiments, we adapt baseline architectures RAT-SQL and BRIDGE and provide in-depth query component analysis. On the target language, both models demonstrate strong results with monolingual training and improved accuracy in multilingual scenario. In this paper, we also study trade-offs between machine-translated and manually-created NL queries. At present, Russian text-to-SQL is lacking in datasets as well as trained models, and we view this work as an important step towards filling this gap.",
- }
- ```
+ [More Information Needed]

  ### Contributions

- Thanks to [@gugutse](https://github.com/Gugutse), [@runnerup96](https://github.com/runnerup96), [@dmi3eva](https://github.com/dmi3eva), [@veradavydova](https://github.com/VeraDavydova), [@tutubalinaev](https://github.com/tutubalinaev) for adding this dataset.
+ Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
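
For reference, a minimal sketch of how the configurations introduced by this commit (ru_os, en_os, ru_trl, en_trl, ru_tsl, en_tsl) and the renamed `template` field could be loaded with the `datasets` library. The Hub repository id is not shown on this page, so `<namespace>/pauq` below is a placeholder, and the printed split size only reflects what the updated card metadata declares.

```python
# Sketch only: "<namespace>/pauq" is a placeholder for the real Hub repo id of this dataset.
from datasets import load_dataset

# Configs after this commit: ru_os, en_os, ru_trl, en_trl, ru_tsl, en_tsl
pauq_ru_os = load_dataset("<namespace>/pauq", "ru_os")

print(pauq_ru_os["train"].num_rows)   # 8800 according to the updated card metadata
example = pauq_ru_os["train"][0]
print(example["question"])            # natural-language question
print(example["query"])               # gold SQL query
print(example["template"])            # field renamed from masked_query in this commit
```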