|
--- |
|
language: en |
|
tags: |
|
- multitabqa |
|
- multi-table-question-answering |
|
license: mit |
|
pipeline_tag: table-question-answering |
|
--- |
|
|
|
# MultiTabQA (base-sized model) |
|
|
|
MultiTabQA was proposed in [MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering](https://arxiv.org/abs/2305.12820) by Vaishali Pal, Andrew Yates, Evangelos Kanoulas, Maarten de Rijke. The original repo can be found [here](https://github.com/kolk/MultiTabQA). |
|
|
|
## Model description |
|
|
|
MultiTabQA is a table question answering (tableQA) model that generates an answer table from multiple input tables. It can handle multi-table operators such as UNION, INTERSECT, EXCEPT, and JOIN.
|
|
|
MultiTabQA is based on the TAPEX (BART) architecture, which pairs a bidirectional (BERT-like) encoder with an autoregressive (GPT-like) decoder.
|
|
|
## Intended Uses |
|
|
|
You can use the raw model for SQL execution over multiple input tables. This checkpoint has been fine-tuned on the GeoQuery dataset, where it answers natural language questions over multiple input tables.
|
|
|
### How to Use |
|
|
|
Here is how to use this model in transformers: |
|
|
|
```python |
|
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM |
|
import pandas as pd |
|
|
|
tokenizer = AutoTokenizer.from_pretrained("vaishali/multitabqa-base-geoquery") |
|
model = AutoModelForSeq2SeqLM.from_pretrained("vaishali/multitabqa-base-geoquery") |
|
|
|
question = "How many departments are led by heads who are not mentioned?" |
|
table_names = ['department', 'management'] |
|
tables = [{"columns":["Department_ID","Name","Creation","Ranking","Budget_in_Billions","Num_Employees"],
|
"index":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14], |
|
"data":[ |
|
[1,"State","1789",1,9.96,30266.0], |
|
[2,"Treasury","1789",2,11.1,115897.0], |
|
[3,"Defense","1947",3,439.3,3000000.0], |
|
[4,"Justice","1870",4,23.4,112557.0], |
|
[5,"Interior","1849",5,10.7,71436.0], |
|
[6,"Agriculture","1889",6,77.6,109832.0], |
|
[7,"Commerce","1903",7,6.2,36000.0], |
|
[8,"Labor","1913",8,59.7,17347.0], |
|
[9,"Health and Human Services","1953",9,543.2,67000.0], |
|
[10,"Housing and Urban Development","1965",10,46.2,10600.0], |
|
[11,"Transportation","1966",11,58.0,58622.0], |
|
[12,"Energy","1977",12,21.5,116100.0], |
|
[13,"Education","1979",13,62.8,4487.0], |
|
[14,"Veterans Affairs","1989",14,73.2,235000.0], |
|
[15,"Homeland Security","2002",15,44.6,208000.0] |
|
] |
|
}, |
|
{"columns":["department_ID","head_ID","temporary_acting"], |
|
"index":[0,1,2,3,4], |
|
"data":[ |
|
[2,5,"Yes"], |
|
[15,4,"Yes"], |
|
[2,6,"Yes"], |
|
[7,3,"No"], |
|
[11,10,"No"] |
|
] |
|
}] |
|
|
|
# build DataFrames from the split-orient dicts above
input_tables = [pd.DataFrame(table["data"], columns=table["columns"], index=table["index"]) for table in tables]
|
|
|
# Flatten the model inputs in the format:
# question + " <table_name> : " + table_name1 + flattened_table1 + " <table_name> : " + table_name2 + flattened_table2 + ...
# The string below is the hand-flattened version of the two tables above.
|
model_input_string = """How many departments are led by heads who are not mentioned? <table_name> : department col : Department_ID | Name | Creation | Ranking | Budget_in_Billions | Num_Employees row 1 : 1 | State | 1789 | 1 | 9.96 | 30266 row 2 : 2 | Treasury | 1789 | 2 | 11.1 | 115897 row 3 : 3 | Defense | 1947 | 3 | 439.3 | 3000000 row 4 : 4 | Justice | 1870 | 4 | 23.4 | 112557 row 5 : 5 | Interior | 1849 | 5 | 10.7 | 71436 row 6 : 6 | Agriculture | 1889 | 6 | 77.6 | 109832 row 7 : 7 | Commerce | 1903 | 7 | 6.2 | 36000 row 8 : 8 | Labor | 1913 | 8 | 59.7 | 17347 row 9 : 9 | Health and Human Services | 1953 | 9 | 543.2 | 67000 row 10 : 10 | Housing and Urban Development | 1965 | 10 | 46.2 | 10600 row 11 : 11 | Transportation | 1966 | 11 | 58.0 | 58622 row 12 : 12 | Energy | 1977 | 12 | 21.5 | 116100 row 13 : 13 | Education | 1979 | 13 | 62.8 | 4487 row 14 : 14 | Veterans Affairs | 1989 | 14 | 73.2 | 235000 row 15 : 15 | Homeland Security | 2002 | 15 | 44.6 | 208000 <table_name> : management col : department_ID | head_ID | temporary_acting row 1 : 2 | 5 | Yes row 2 : 15 | 4 | Yes row 3 : 2 | 6 | Yes row 4 : 7 | 3 | No row 5 : 11 | 10 | No""" |
|
inputs = tokenizer(model_input_string, return_tensors="pt") |
|
|
|
outputs = model.generate(**inputs) |
|
|
|
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) |
|
# ['col : count(*) row 1 : 11']
|
``` |
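The `model_input_string` above was flattened by hand. A small helper that produces the same format from pandas DataFrames might look like the sketch below; `linearize_table` and `build_model_input` are illustrative names, not part of the model's or library's API, and the exact whitespace conventions are an assumption based on the example string above.

```python
import pandas as pd

def linearize_table(table: pd.DataFrame) -> str:
    """Flatten a DataFrame into the TAPEX-style linear format:
    'col : c1 | c2 ... row 1 : v1 | v2 ... row 2 : ...'"""
    parts = ["col : " + " | ".join(str(c) for c in table.columns)]
    for i, (_, row) in enumerate(table.iterrows(), start=1):
        parts.append(f"row {i} : " + " | ".join(str(v) for v in row))
    return " ".join(parts)

def build_model_input(question: str, table_names: list, tables: list) -> str:
    """Prefix each linearized table with its name and append all of them
    to the question, matching the hand-written model_input_string above."""
    flattened = " ".join(
        f"<table_name> : {name} {linearize_table(table)}"
        for name, table in zip(table_names, tables)
    )
    return f"{question} {flattened}"
```

With the `input_tables` built earlier, `build_model_input(question, table_names, input_tables)` should yield a string of the same shape as `model_input_string` (numeric formatting, e.g. `30266.0` vs `30266`, may differ depending on the DataFrame dtypes).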
|
|
|
### How to Fine-tune |
|
|
|
Please find the fine-tuning script [here](https://github.com/kolk/MultiTabQA). |
|
|
|
### BibTeX entry and citation info |
|
|
|
```bibtex |
|
@misc{pal2023multitabqa, |
|
title={MultiTabQA: Generating Tabular Answers for Multi-Table Question Answering}, |
|
author={Vaishali Pal and Andrew Yates and Evangelos Kanoulas and Maarten de Rijke}, |
|
year={2023}, |
|
eprint={2305.12820}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |