---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: query
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: table_names
    sequence: string
  - name: tables
    sequence: string
  - name: source
    dtype: string
  - name: target
    dtype: string
  splits:
  - name: train
    num_bytes: 9440328
    num_examples: 530
  - name: validation
    num_bytes: 829668
    num_examples: 49
  - name: test
    num_bytes: 4626906
    num_examples: 253
  download_size: 1988975
  dataset_size: 14896902
task_categories:
- table-question-answering
---
# Dataset Card for "geoQuery-tableQA"
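This dataset is the GeoQuery split used to fine-tune MultiTabQA (Pal et al., ACL 2023) for multi-table question answering. Each example pairs a natural-language question and its SQL query with one or more input tables (serialized as pandas JSON) and a tabular answer, along with flattened `source`/`target` strings for seq2seq training.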
# Usage
```python
import pandas as pd
from datasets import load_dataset

geoQuery_tableQA = load_dataset("vaishali/geoQuery-tableQA")

for sample in geoQuery_tableQA['train']:
    question = sample['question']
    input_table_names = sample["table_names"]
    # input tables and the answer are pandas DataFrames serialized with orient='split'
    input_tables = [pd.read_json(table, orient='split') for table in sample['tables']]
    answer = pd.read_json(sample['answer'], orient='split')

    # flattened input/output strings
    input_to_model = sample["source"]
    target = sample["target"]
```
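As a quick sanity check, the loaded splits can be compared against the sizes declared in the metadata above, and one answer can be round-tripped through pandas. A minimal sketch (the sizes in the comment come from the split metadata, not from a fresh run):

```python
import pandas as pd
from datasets import load_dataset

geoQuery_tableQA = load_dataset("vaishali/geoQuery-tableQA")

# Split sizes per the dataset metadata: 530 train, 49 validation, 253 test.
print({split: len(geoQuery_tableQA[split]) for split in geoQuery_tableQA})

# Answers are DataFrames serialized with orient='split'; round-trip one to inspect it.
sample = geoQuery_tableQA['test'][0]
answer = pd.read_json(sample['answer'], orient='split')
print(answer.shape, list(answer.columns))
```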
# BibTeX entry and citation info
```
@inproceedings{pal-etal-2023-multitabqa,
title = "{M}ulti{T}ab{QA}: Generating Tabular Answers for Multi-Table Question Answering",
author = "Pal, Vaishali and
Yates, Andrew and
Kanoulas, Evangelos and
de Rijke, Maarten",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.348",
doi = "10.18653/v1/2023.acl-long.348",
pages = "6322--6334",
abstract = "Recent advances in tabular question answering (QA) with large language models are constrained in their coverage and only answer questions over a single table. However, real-world queries are complex in nature, often over multiple tables in a relational database or web page. Single table questions do not involve common table operations such as set operations, Cartesian products (joins), or nested queries. Furthermore, multi-table operations often result in a tabular output, which necessitates table generation capabilities of tabular QA models. To fill this gap, we propose a new task of answering questions over multiple tables. Our model, MultiTabQA, not only answers questions over multiple tables, but also generalizes to generate tabular answers. To enable effective training, we build a pre-training dataset comprising of 132,645 SQL queries and tabular answers. Further, we evaluate the generated tables by introducing table-specific metrics of varying strictness assessing various levels of granularity of the table structure. MultiTabQA outperforms state-of-the-art single table QA models adapted to a multi-table QA setting by finetuning on three datasets: Spider, Atis and GeoQuery.",
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)