---
license: mit
viewer: true
task_categories:
  - table-question-answering
  - table-to-text
language:
  - en
pretty_name: TableEval
configs:
  - config_name: default
    data_files:
      - split: comtqa_fin
        path: ComTQA/FinTabNet/comtqa_fintabnet.json
      - split: comtqa_pmc
        path: ComTQA/PubTab1M/comtqa_pubtab1m.json
      - split: logic2text
        path: Logic2Text/logic2text.json
      - split: logicnlg
        path: LogicNLG/logicnlg.json
      - split: scigen
        path: SciGen/scigen.json
      - split: numericnlg
        path: numericNLG/numericnlg.json
size_categories:
  - 1K<n<10K
---

TableEval dataset

GitHub ACL arXiv

TableEval is designed to benchmark and compare the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text. It comprises six data subsets derived from the test sets of existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of 3017 tables and 11312 instances. The scientific subset includes tables from pre-prints and peer-reviewed scholarly publications, while the non-scientific subset contains tables from Wikipedia and financial reports. Each table is available as a PNG image and in four textual formats: HTML, XML, LaTeX, and Dictionary (Dict). All task annotations are taken from the source datasets. Please refer to the paper Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data for more details.
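The textual annotations can be loaded as regular 🤗 Datasets splits using the split names defined in the configuration above. The sketch below is a minimal example; the repository ID `katebor/TableEval` is an assumption based on this dataset card and may need to be adjusted.

```python
from datasets import load_dataset

# Split names follow the `configs` section of this card:
# comtqa_fin, comtqa_pmc, logic2text, logicnlg, scigen, numericnlg.
# NOTE: the repo ID below is an assumption; adjust it if the dataset lives elsewhere.
scigen = load_dataset("katebor/TableEval", split="scigen")

print(scigen)            # number of rows in the SciGen subset
print(scigen[0].keys())  # field names of the first instance
```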

Overview and statistics

| Dataset | Task | Source | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | VQA | PubMed Central | ⬇️ | ⚙️ | ⚙️ | ⚙️ | 📄 |
| numericNLG | T2T | ACL Anthology | 📄 | ⬇️ | ⚙️ | ⬇️ | ⚙️ |
| SciGen | T2T | arXiv and ACL Anthology | 📄 | ⬇️ | 📄 | ⚙️ | ⚙️ |
| ComTQA (FinTabNet) | VQA | Earnings reports of S&P 500 companies | 📄 | ⚙️ | ⚙️ | ⚙️ | ⚙️ |
| LogicNLG | T2T | Wikipedia | ⚙️ | ⬇️ | ⚙️ | 📄 | ⚙️ |
| Logic2Text | T2T | Wikipedia | ⚙️ | ⬇️ | ⚙️ | 📄 | ⚙️ |

Symbol ⬇️ indicates formats already available in the given corpus, while 📄 and ⚙️ denote formats extracted from the table source files (e.g., article PDF, Wikipedia page) and generated from other formats in this study, respectively.

Number of tables per format and data subset

| Dataset | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | 932 | 932 | 932 | 932 | 932 |
| numericNLG | 135 | 135 | 135 | 135 | 135 |
| SciGen | 1035 | 1035 | 928 | 985 | 961 |
| ComTQA (FinTabNet) | 659 | 659 | 659 | 659 | 659 |
| LogicNLG | 184 | 184 | 184 | 184 | 184 |
| Logic2Text | 72 | 72 | 72 | 72 | 72 |
| Total | 3017 | 3017 | 2910 | 2967 | 2943 |

Total number of instances per format and data subset

| Dataset | Image | Dict | LaTeX | HTML | XML |
|---|---|---|---|---|---|
| ComTQA (PubTables-1M) | 6232 | 6232 | 6232 | 6232 | 6232 |
| numericNLG | 135 | 135 | 135 | 135 | 135 |
| SciGen | 1035 | 1035 | 928 | 985 | 961 |
| ComTQA (FinTabNet) | 2838 | 2838 | 2838 | 2838 | 2838 |
| LogicNLG | 917 | 917 | 917 | 917 | 917 |
| Logic2Text | 155 | 155 | 155 | 155 | 155 |
| Total | 11312 | 11312 | 11205 | 11262 | 11238 |

Structure

├── ComTQA
│   ├── FinTabNet
│   │   ├── comtqa_fintabnet.json
│   │   └── comtqa_fintabnet_imgs.zip
│   └── PubTab1M
│       ├── comtqa_pubtab1m.json
│       └── comtqa_pubtab1m_imgs.zip
├── Logic2Text
│   ├── logic2text.json
│   └── logic2text_imgs.zip
├── LogicNLG
│   ├── logicnlg.json
│   └── logicnlg_imgs.zip
├── SciGen
│   ├── scigen.json
│   └── scigen_imgs.zip
└── numericNLG
    ├── numericnlg.json
    └── numericnlg_imgs.zip

For more details on each subset, please refer to the respective README.md files: ComTQA, Logic2Text, LogicNLG, SciGen, numericNLG.
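The PNG renderings of the tables are distributed as one zip archive per subset (see the structure above). Below is a minimal sketch for fetching and unpacking a single archive with `huggingface_hub`, again assuming the repo ID `katebor/TableEval`.

```python
import zipfile

from huggingface_hub import hf_hub_download

# Archive path taken from the directory structure above;
# the repo ID is an assumption and may need to be adjusted.
zip_path = hf_hub_download(
    repo_id="katebor/TableEval",
    filename="SciGen/scigen_imgs.zip",
    repo_type="dataset",
)

with zipfile.ZipFile(zip_path) as archive:
    archive.extractall("scigen_imgs")   # unpack the PNG tables
    print(archive.namelist()[:5])       # peek at the first few image files
```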

Citation

@inproceedings{borisova-etal-2025-table,
    title = "Table Understanding and (Multimodal) {LLM}s: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data",
    author = {Borisova, Ekaterina  and
      Barth, Fabio  and
      Feldhus, Nils  and
      Abu Ahmad, Raia  and
      Ostendorff, Malte  and
      Ortiz Suarez, Pedro  and
      Rehm, Georg  and
      M{\"o}ller, Sebastian},
    editor = "Chang, Shuaichen  and
      Hulsebos, Madelon  and
      Liu, Qian  and
      Chen, Wenhu  and
      Sun, Huan",
    booktitle = "Proceedings of the 4th Table Representation Learning Workshop",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.trl-1.10/",
    pages = "109--142",
    ISBN = "979-8-89176-268-8",
    abstract = "Tables are among the most widely used tools for representing structured data in research, business, medicine, and education. Although LLMs demonstrate strong performance in downstream tasks, their efficiency in processing tabular data remains underexplored. In this paper, we investigate the effectiveness of both text-based and multimodal LLMs on table understanding tasks through a cross-domain and cross-modality evaluation. Specifically, we compare their performance on tables from scientific vs. non-scientific contexts and examine their robustness on tables represented as images vs. text. Additionally, we conduct an interpretability analysis to measure context usage and input relevance. We also introduce the TableEval benchmark, comprising 3017 tables from scholarly publications, Wikipedia, and financial reports, where each table is provided in five different formats: Image, Dictionary, HTML, XML, and LaTeX. Our findings indicate that while LLMs maintain robustness across table modalities, they face significant challenges when processing scientific tables."
}

Funding

This work has received funding through the DFG project NFDI4DS (no. 460234259).
