|
|
--- |
|
|
license: mit |
|
|
viewer: true |
|
|
task_categories: |
|
|
- table-question-answering |
|
|
- table-to-text |
|
|
language: |
|
|
- en |
|
|
pretty_name: TableEval |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: comtqa_fin |
|
|
path: ComTQA/FinTabNet/comtqa_fintabnet.json |
|
|
- split: comtqa_pmc |
|
|
path: ComTQA/PubTab1M/comtqa_pubtab1m.json |
|
|
- split: logic2text |
|
|
path: Logic2Text/logic2text.json |
|
|
- split: logicnlg |
|
|
path: LogicNLG/logicnlg.json |
|
|
- split: scigen |
|
|
path: SciGen/scigen.json |
|
|
- split: numericnlg |
|
|
path: numericNLG/numericnlg.json |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
--- |
|
|
|
|
|
# TableEval dataset |
|
|
|
|
|
[](https://github.com/esborisova/TableEval-Study) |
|
|
[](https://aclanthology.org/2025.trl-1.10/) |
|
|
[](https://arxiv.org/abs/2507.00152) |
|
|
|
|
|
|
|
|
**TableEval** was developed to benchmark and compare the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text.
|
|
It comprises six data subsets derived from the test sets of existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of **3017 tables** and **11312 instances**. |
|
|
The scientific subset includes tables from preprints and peer-reviewed scholarly publications, while the non-scientific subset contains tables from Wikipedia and financial reports.
|
|
Each table is available as a **PNG** image and in four textual formats: **HTML**, **XML**, **LaTeX**, and **Dictionary (Dict)**. |
|
|
All task annotations are taken from the source datasets. Please refer to the [Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data](https://aclanthology.org/2025.trl-1.10/) paper for more details.
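The splits defined in the YAML header above can be loaded with the 🤗 `datasets` library. Below is a minimal sketch; note that `user/TableEval` is a placeholder repository identifier, so substitute this dataset's actual Hub ID:

```python
from datasets import load_dataset

# Each split corresponds to one subset, as configured in the YAML header:
# comtqa_fin, comtqa_pmc, logic2text, logicnlg, scigen, numericnlg.
# "user/TableEval" is a placeholder; replace it with this dataset's Hub ID.
logicnlg = load_dataset("user/TableEval", split="logicnlg")

print(logicnlg)     # number of rows and column names
print(logicnlg[0])  # first instance with its task annotations
```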
|
|
|
|
|
|
|
|
## Overview and statistics |
|
|
|
|
|
|
|
|
| Dataset | Task | Source | Image | Dict | LaTeX | HTML | XML | |
|
|
|-----------------------|--------------------|-------------------|---------------|---------------|---------------|---------------|---------------| |
|
|
| ComTQA (PubTables-1M) <img src='https://img.shields.io/badge/arXiv-2024-red'> <a href='https://arxiv.org/abs/2406.01326'><img src='https://img.shields.io/badge/PDF-blue'></a> <a href='https://huggingface.co/datasets/ByteDance/ComTQA'><img src='https://img.shields.io/badge/Dataset-gold'></a> | VQA | PubMed Central | ⬇️ | ✔️ | ✔️ | ✔️ | 🛠️ |
|
|
| numericNLG <img src='https://img.shields.io/badge/ACL-2021-red'> <a href='https://aclanthology.org/2021.acl-long.115.pdf'><img src='https://img.shields.io/badge/PDF-blue'></a> <a href='https://huggingface.co/datasets/kasnerz/numericnlg?row=0'><img src='https://img.shields.io/badge/Dataset-gold'></a> | T2T | ACL Anthology | 🛠️ | ⬇️ | ✔️ | ⬇️ | ✔️ |
|
|
| SciGen <img src='https://img.shields.io/badge/arXiv-2021-red'> <a href='https://arxiv.org/abs/2104.08296'><img src='https://img.shields.io/badge/PDF-blue'></a> <a href='https://github.com/UKPLab/SciGen/tree/main'><img src='https://img.shields.io/badge/Dataset-gold'></a> | T2T | arXiv and ACL Anthology | 🛠️ | ⬇️ | 🛠️ | ✔️ | ✔️ |
|
|
| ComTQA (FinTabNet) <img src='https://img.shields.io/badge/arXiv-2024-red'> <a href='https://arxiv.org/abs/2406.01326'><img src='https://img.shields.io/badge/PDF-blue'></a> <a href='https://huggingface.co/datasets/ByteDance/ComTQA'><img src='https://img.shields.io/badge/Dataset-gold'></a> | VQA | Earnings reports of S&P 500 companies | 🛠️ | ✔️ | ✔️ | ✔️ | ✔️ |
|
|
| LogicNLG <img src='https://img.shields.io/badge/ACL-2020-red'> <a href='https://aclanthology.org/2020.acl-main.708/'><img src='https://img.shields.io/badge/PDF-blue'></a> <a href='https://huggingface.co/datasets/kasnerz/logicnlg'><img src='https://img.shields.io/badge/Dataset-gold'></a> | T2T | Wikipedia | ✔️ | ⬇️ | ✔️ | 🛠️ | ✔️ |
|
|
| Logic2Text <img src='https://img.shields.io/badge/EMNLP-2020-red'> <a href='https://aclanthology.org/2020.findings-emnlp.190/'><img src='https://img.shields.io/badge/PDF-blue'></a> <a href='https://huggingface.co/datasets/kasnerz/logic2text'><img src='https://img.shields.io/badge/Dataset-gold'></a> | T2T | Wikipedia | ✔️ | ⬇️ | ✔️ | 🛠️ | ✔️ |
|
|
|
|
|
Symbol ⬇️ indicates formats already available in the given corpus, while 🛠️ and ✔️ denote formats extracted from the table source files (e.g., article PDF, Wikipedia page) and generated from other formats in this study, respectively.
|
|
|
|
|
#### Number of tables per format and data subset |
|
|
|
|
|
| Dataset | Image | Dict | LaTeX | HTML | XML | |
|
|
|------------------------- |--------------------|-------------------|---------------|---------------|---------------| |
|
|
| ComTQA (PubTables-1M) | 932 | 932 | 932 | 932 | 932 | |
|
|
| numericNLG | 135 | 135 | 135 | 135 | 135 | |
|
|
| SciGen | 1035 | 1035 | 928 | 985 | 961 | |
|
|
| ComTQA (FinTabNet) | 659 | 659 | 659 | 659 | 659 | |
|
|
| LogicNLG | 184 | 184 | 184 | 184 | 184 | |
|
|
| Logic2Text | 72 | 72 | 72 | 72 | 72 | |
|
|
| **Total** | **3017** | **3017** | **2910** | **2967** | **2943** | |
|
|
|
|
|
|
|
|
#### Total number of instances per format and data subset |
|
|
|
|
|
| Dataset | Image | Dict | LaTeX | HTML | XML | |
|
|
|------------------------- |--------------------|-------------------|---------------|---------------|---------------| |
|
|
| ComTQA (PubTables-1M) | 6232 | 6232 | 6232 | 6232 | 6232 | |
|
|
| numericNLG | 135 | 135 | 135 | 135 | 135 | |
|
|
| SciGen | 1035 | 1035 | 928 | 985 | 961 | |
|
|
| ComTQA (FinTabNet) | 2838 | 2838 | 2838 | 2838 | 2838 | |
|
|
| LogicNLG | 917 | 917 | 917 | 917 | 917 | |
|
|
| Logic2Text | 155 | 155 | 155 | 155 | 155 | |
|
|
| **Total** | **11312** | **11312** | **11205** | **11262** | **11238** | |
|
|
|
|
|
|
|
|
## Structure |
|
|
|
|
|
├── ComTQA
│   ├── FinTabNet
│   │   ├── comtqa_fintabnet.json
│   │   └── comtqa_fintabnet_imgs.zip
│   └── PubTab1M
│       ├── comtqa_pubtab1m.json
│       └── comtqa_pubtab1m_imgs.zip
├── Logic2Text
│   ├── logic2text.json
│   └── logic2text_imgs.zip
├── LogicNLG
│   ├── logicnlg.json
│   └── logicnlg_imgs.zip
├── SciGen
│   ├── scigen.json
│   └── scigen_imgs.zip
└── numericNLG
    ├── numericnlg.json
    └── numericnlg_imgs.zip
|
|
|
|
|
For more details on each subset, please refer to the respective README.md files: [ComTQA](ComTQA/README.md), [Logic2Text](Logic2Text/README.md), [LogicNLG](LogicNLG/README.md), [SciGen](SciGen/README.md), [numericNLG](numericNLG/README.md).
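The table images ship as PNGs inside the per-subset zip archives, next to the JSON annotation files. Below is a minimal sketch for reading both directly; it assumes each JSON file holds a list of instances, which is an assumption rather than a documented schema, so consult the subset READMEs for the actual fields:

```python
import json
import zipfile

from PIL import Image

# Load the annotations of one subset. Assumption: the top-level JSON
# value is a list of instances; see the subset README for the real schema.
with open("LogicNLG/logicnlg.json", encoding="utf-8") as f:
    instances = json.load(f)

# Table images are PNGs packaged in the per-subset zip archive.
with zipfile.ZipFile("LogicNLG/logicnlg_imgs.zip") as archive:
    png_names = [n for n in archive.namelist() if n.endswith(".png")]
    with archive.open(png_names[0]) as img_file:
        image = Image.open(img_file)
        image.load()  # decode fully before the archive is closed

print(f"{len(instances)} instances, {len(png_names)} table images")
```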
|
|
|
|
|
## Citation |
|
|
|
|
|
``` |
|
|
@inproceedings{borisova-etal-2025-table, |
|
|
title = "Table Understanding and (Multimodal) {LLM}s: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data", |
|
|
author = {Borisova, Ekaterina and |
|
|
Barth, Fabio and |
|
|
Feldhus, Nils and |
|
|
Abu Ahmad, Raia and |
|
|
Ostendorff, Malte and |
|
|
Ortiz Suarez, Pedro and |
|
|
Rehm, Georg and |
|
|
M{\"o}ller, Sebastian}, |
|
|
editor = "Chang, Shuaichen and |
|
|
Hulsebos, Madelon and |
|
|
Liu, Qian and |
|
|
Chen, Wenhu and |
|
|
Sun, Huan", |
|
|
booktitle = "Proceedings of the 4th Table Representation Learning Workshop", |
|
|
month = jul, |
|
|
year = "2025", |
|
|
address = "Vienna, Austria", |
|
|
publisher = "Association for Computational Linguistics", |
|
|
url = "https://aclanthology.org/2025.trl-1.10/", |
|
|
pages = "109--142", |
|
|
ISBN = "979-8-89176-268-8", |
|
|
abstract = "Tables are among the most widely used tools for representing structured data in research, business, medicine, and education. Although LLMs demonstrate strong performance in downstream tasks, their efficiency in processing tabular data remains underexplored. In this paper, we investigate the effectiveness of both text-based and multimodal LLMs on table understanding tasks through a cross-domain and cross-modality evaluation. Specifically, we compare their performance on tables from scientific vs. non-scientific contexts and examine their robustness on tables represented as images vs. text. Additionally, we conduct an interpretability analysis to measure context usage and input relevance. We also introduce the TableEval benchmark, comprising 3017 tables from scholarly publications, Wikipedia, and financial reports, where each table is provided in five different formats: Image, Dictionary, HTML, XML, and LaTeX. Our findings indicate that while LLMs maintain robustness across table modalities, they face significant challenges when processing scientific tables." |
|
|
} |
|
|
``` |
|
|
|
|
|
## Funding |
|
|
|
|
|
This work has received funding through the DFG project [NFDI4DS](https://www.nfdi4datascience.de) (no. 460234259). |
|
|
|
|
|
<div style="position: relative; width: 100%;"> |
|
|
<img src="NFDI4DS.png" alt="drawing" width="200" style="position: absolute; bottom: 0; right: 0;"/> |
|
|
</div> |