DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?
Abstract
Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have demonstrated impressive language/vision reasoning abilities, igniting the recent trend of building agents for targeted applications such as shopping assistants or AI software engineers. Recently, many data science benchmarks have been proposed to investigate their performance in the data science domain. However, existing data science benchmarks still fall short when compared to real-world data science applications due to their simplified settings. To bridge this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents with realistic tasks. This benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions. DSBench offers a realistic setting by encompassing long contexts, multimodal task backgrounds, reasoning with large data files and multi-table structures, and performing end-to-end data modeling tasks. Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks, with the best agent solving only 34.12% of data analysis tasks and achieving a 34.74% Relative Performance Gap (RPG). These findings underscore the need for further advancements in developing more practical, intelligent, and autonomous data science agents.
Community
How far are data science agents to becoming data science experts? Our brand new data science benchmark comes with a comprehensive evaluation.
Code and data released at GitHub: https://github.com/LiqiangJing/DSBench
Hi @xywang1, congrats on this work and thanks for making the dataset available on the Hub!
Note that for things like the dataset viewer to work, you would need to follow this guide: https://huggingface.co/docs/datasets/loading, after which you can call dataset.push_to_hub(...) to push it to your repo. This also makes the dataset usable through the Datasets library.
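For example, a minimal sketch of that flow (the data file name and repo id below are placeholders, assuming the benchmark files have already been converted to a format the Datasets library can read, such as JSON Lines):

```python
from datasets import load_dataset

# Load the locally prepared benchmark file (placeholder filename).
dataset = load_dataset("json", data_files="dsbench_data_analysis.jsonl")

# Push it to a Hub repo so the dataset viewer and `load_dataset` work for everyone.
dataset.push_to_hub("your-username/DSBench")
```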
Cheers!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- BLADE: Benchmarking Language Model Agents for Data-Driven Science (2024)
- MMAU: A Holistic Benchmark of Agent Capabilities Across Diverse Domains (2024)
- PyBench: Evaluating LLM Agent on various real-world coding tasks (2024)
- Structured Event Reasoning with Large Language Models (2024)
- Can We Rely on LLM Agents to Draft Long-Horizon Plans? Let's Take TravelPlanner as an Example (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
Great paper! It is a groundbreaking and pioneering article.