---
task_categories:
- text2text-generation
language:
- en
size_categories:
- 1K<n<10K
tags:
- schema-summarization
modalities:
- Text
---

# Dataset Card for schema-summarization_spider
## Dataset Description
### Dataset Summary
This dataset was built to train and benchmark models on the schema-summarization task: generating the smallest subset of a database schema needed to answer a natural-language question, given the original schema.
It was built by crossing two datasets:
- `xlangai/spider`
- `richardr1126/spider-schema`

From the first dataset we take the natural-language question and its SQL query. From the second we take the schema associated with the database id used to answer the question. We then generate the summarized schema with the help of the SQL query.
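The cross-dataset join described above can be sketched as pairing each (question, query) row with the schema of its `db_id`. The field names and toy rows below are assumptions for illustration, not verified against the actual datasets:

```python
# Hypothetical sketch: join Spider-style rows with their database schema
# by db_id. Row contents and field names are illustrative assumptions.
spider_rows = [
    {"db_id": "concert_singer",
     "question": "How many singers do we have?",
     "query": "SELECT count(*) FROM singer"},
]
schema_rows = [
    {"db_id": "concert_singer",
     "schema": "singer(singer_id, name, age)"},
]

# Index the schemas by database id, then attach one to each question row.
schema_by_db = {row["db_id"]: row["schema"] for row in schema_rows}
merged = [
    {**row, "schema": schema_by_db[row["db_id"]]}
    for row in spider_rows
    if row["db_id"] in schema_by_db
]
```

In practice the two source datasets would be loaded with the `datasets` library and joined the same way on `db_id`.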

### Languages
Since `xlangai/spider` and `richardr1126/spider-schema` are labeled only in English, this dataset is also labeled in English.
## Dataset Structure
### Data Fields
- **db_id**: The database name
- **question**: The natural-language question
- **schema**: The full database schema
- **summarized-schema**: The subset of the full schema needed to answer the question
- **shrink-score**: The percentage of columns removed from the original schema
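Reading **shrink-score** as "percentage of columns removed", it could be computed as follows (a sketch of one plausible definition, not the dataset's exact code):

```python
def shrink_score(full_columns, kept_columns):
    """Percentage of columns removed from the full schema (0-100).

    full_columns: all columns in the original schema.
    kept_columns: columns retained in the summarized schema.
    """
    removed = len(full_columns) - len(kept_columns)
    return 100.0 * removed / len(full_columns)
```

For example, keeping 4 of 10 columns gives a shrink-score of 60.0.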
### Data Splits
- **train**: 6985 (question, schema, summarized-schema) tuples
- **validation**: 1032 (question, schema, summarized-schema) tuples
## Dataset Creation
### Process
The summarized schema is created in several steps.
First, we go through every word in the original SQL query and check whether it matches a column name in the original schema; every column found this way is added to the summarized schema.
To handle the `*` wildcard, we also automatically include the primary key of each table referenced in the original SQL query.
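The steps above can be sketched as follows. This is a simplified illustration of the matching logic under assumed data shapes (a dict of table columns and a dict of primary keys), not the dataset's actual generation script:

```python
import re

def summarize_schema(sql, schema, primary_keys):
    """Keep only the columns of `schema` mentioned in `sql`.

    schema: {table_name: [column_name, ...]}
    primary_keys: {table_name: primary_key_column}
    Returns {table_name: [kept_columns]} for tables named in the query.
    """
    # Split the query into bare word tokens for name matching.
    tokens = set(re.findall(r"\w+", sql.lower()))
    kept = {}
    for table, columns in schema.items():
        if table.lower() not in tokens:
            continue  # table not referenced by the query
        cols = [c for c in columns if c.lower() in tokens]
        # Leverage the '*' wildcard: include the table's primary key
        # when the query uses '*'.
        if "*" in sql and primary_keys[table] not in cols:
            cols.append(primary_keys[table])
        kept[table] = cols
    return kept
```

A real implementation would also need to handle aliases, quoted identifiers, and columns sharing a name across tables, as noted in the TODO list below.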
### Source Data
As explained above, the natural-language questions and the SQL queries that answer them are extracted from the `xlangai/spider` dataset, and the database schemas are extracted from the `richardr1126/spider-schema` dataset.

## TODO
- [x] Fix rows with empty summarized-schema
- [x] Fix overwhelmingly long summarized-schema: sometimes a needed column has the same name in different tables; include it only once (maybe?)
- [x] Remove primary key from summarized-schema when the '*' wildcard is not used
- [x] Add a shrink score
- [ ] Prompt engineer `Llama3.1:70b` with a 1-shot example to generate better summarized-schema
- [ ] Find a way to add data from WikiSQL