arxiv:2409.07703

DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?

Published on Sep 12
· Submitted by xywang1 on Sep 13
#1 Paper of the day

Abstract

Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have demonstrated impressive language/vision reasoning abilities, igniting the recent trend of building agents for targeted applications such as shopping assistants or AI software engineers. Recently, many data science benchmarks have been proposed to investigate their performance in the data science domain. However, existing data science benchmarks still fall short when compared to real-world data science applications due to their simplified settings. To bridge this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents with realistic tasks. This benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions. DSBench offers a realistic setting by encompassing long contexts, multimodal task backgrounds, reasoning with large data files and multi-table structures, and performing end-to-end data modeling tasks. Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks, with the best agent solving only 34.12% of data analysis tasks and achieving a 34.74% Relative Performance Gap (RPG). These findings underscore the need for further advancements in developing more practical, intelligent, and autonomous data science agents.

Community

Paper author · Paper submitter

How far are data science agents from becoming data science experts? Our brand-new data science benchmark comes with a comprehensive evaluation.

Code and data released at GitHub: https://github.com/LiqiangJing/DSBench


Hi @xywang1, congrats on this work, and thanks for making the dataset available on the Hub!

Note that for things like the dataset viewer to work, one would need to follow this guide: https://huggingface.co/docs/datasets/loading, after which you can call dataset.push_to_hub(...) to push it to your repo. This also makes the dataset usable through the Datasets library.
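For reference, here is a minimal sketch of that flow in Python (the file name and repo id below are hypothetical placeholders, assuming the tasks are stored as CSV):

```python
from datasets import load_dataset

# Load the local task files into a DatasetDict (assuming a CSV layout;
# "data_analysis_tasks.csv" is a hypothetical placeholder file name)
ds = load_dataset("csv", data_files={"test": "data_analysis_tasks.csv"})

# Push to the Hub so the dataset viewer works and others can load it
# directly with load_dataset(); the repo id here is hypothetical
ds.push_to_hub("your-namespace/DSBench")
```

After pushing, the dataset renders in the viewer and loads anywhere with a single `load_dataset("your-namespace/DSBench")` call.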

Cheers!

Paper author · Paper submitter

An overall workflow of our proposed DSBench benchmark.


Great paper! It is a groundbreaking and pioneering article.

