VitaBench: Benchmarking LLM Agents with Versatile Interactive Tasks
Paper • Website • Leaderboard • Code • Dataset
News
- [2025-10] Our paper is released on arXiv: VitaBench: Benchmarking LLM Agents with Versatile Interactive Tasks in Real-world Applications
- [2025-10] The VitaBench suite is released, including the codebase, dataset, and evaluation pipeline! If you have any questions, feel free to raise issues and/or submit pull requests for new features or bug fixes.
Introduction
In this paper, we introduce VitaBench, a challenging benchmark that evaluates agents on versatile interactive tasks grounded in real-world settings. Drawing from daily applications in food delivery, in-store consumption, and online travel services (OTA), VitaBench presents agents with the most complex life-serving simulation environment to date, comprising 66 tools. Through a framework that eliminates domain-specific policies, we enable flexible composition of these scenarios and tools, yielding 100 cross-scenario tasks (main results) and 300 single-scenario tasks. Each task is derived from multiple real user requests and requires agents to reason across temporal and spatial dimensions, utilize complex tool sets, proactively clarify ambiguous instructions, and track shifting user intent throughout multi-turn conversations.
Moreover, we propose a rubric-based sliding window evaluator, enabling robust assessment of diverse solution pathways in complex environments and stochastic interactions. Our comprehensive evaluation reveals that even the most advanced models achieve only a 30% success rate on cross-scenario tasks, and less than a 50% success rate on the others. Overall, we believe VitaBench will serve as a valuable resource for advancing the development of AI agents in practical real-world applications.
The name "Vita" derives from the Latin word for "life", reflecting our focus on life-serving applications.
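For intuition only, here is a minimal sketch of what a rubric-based sliding-window check over a multi-turn conversation might look like. The `RubricItem` structure, window size, and keyword-based `judge_window` stand-in are assumptions for illustration and do not reflect VitaBench's actual evaluator, which relies on an LLM judge and its own rubric format.

```python
# Illustrative sketch only: a rubric-based sliding-window check over a
# multi-turn conversation. The rubric items, window size, and the
# keyword-based judge are stand-ins, not VitaBench's implementation.
from dataclasses import dataclass


@dataclass
class RubricItem:
    description: str  # e.g. "Agent confirms the delivery address"
    keyword: str      # toy stand-in for an LLM-based judgment


def judge_window(window: list[str], item: RubricItem) -> bool:
    """Toy judge: decide whether this window of turns satisfies the rubric item."""
    return any(item.keyword in turn.lower() for turn in window)


def evaluate(conversation: list[str], rubric: list[RubricItem], window_size: int = 4) -> float:
    """Slide a fixed-size window over the conversation; a rubric item counts as
    satisfied if any window satisfies it. Return the fraction of satisfied items."""
    satisfied: set[int] = set()
    for start in range(max(1, len(conversation) - window_size + 1)):
        window = conversation[start:start + window_size]
        for i, item in enumerate(rubric):
            if i not in satisfied and judge_window(window, item):
                satisfied.add(i)
    return len(satisfied) / len(rubric) if rubric else 1.0


# Toy usage
conv = [
    "user: I want a pizza delivered",
    "agent: Sure, what is your address?",
    "user: 10 Example Street",
    "agent: Order placed, delivering to 10 Example Street",
]
rubric = [RubricItem("confirms address", "address"),
          RubricItem("places the order", "order placed")]
print(evaluate(conv, rubric, window_size=2))  # -> 1.0
```

Windowing of this kind keeps the judgment local to a few turns at a time, which is what makes long, stochastic multi-turn interactions tractable to score.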
Benchmark Details
Statistics of databases and environments:
|  | Cross-Scenarios (All domains) | Delivery | In-store | OTA |
|---|---|---|---|---|
| Databases |  |  |  |  |
| Service Providers | 1,324 | 410 | 611 | 1,437 |
| Products | 6,946 | 788 | 3,277 | 9,693 |
| Transactions | 447 | 48 | 28 | 154 |
| API Tools |  |  |  |  |
| Write | 27 | 4 | 9 | 14 |
| Read | 33 | 10 | 10 | 19 |
| General | 6 | 6 | 5 | 5 |
| Tasks | 100 | 100 | 100 | 100 |
Environment
VitaBench provides an evaluation framework that supports model evaluations on both single-domain and cross-domain tasks through flexible configuration. For cross-domain evaluation, simply connect multiple domain names with commas; this will automatically merge the environments of the specified domains into a unified environment. Please visit our GitHub repository vitabench for more detailed instructions.
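As a rough illustration, the sketch below shows one way a comma-separated domain string could be parsed and the per-domain environments merged into a unified one. The `Environment` class, `build_environment` helper, and registry names are hypothetical, not the actual vitabench API; see the GitHub repository for the real interface.

```python
# Illustrative sketch only: parsing a comma-separated domain string and
# merging the corresponding single-domain environments into one. The
# Environment class and field names are hypothetical, not the vitabench API.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Environment:
    domains: list[str]
    tools: dict[str, Callable] = field(default_factory=dict)
    databases: dict[str, list] = field(default_factory=dict)

    def merge(self, other: "Environment") -> "Environment":
        """Union the tool sets and databases of two domain environments."""
        return Environment(
            domains=self.domains + other.domains,
            tools={**self.tools, **other.tools},            # shared tools dedupe by name
            databases={**self.databases, **other.databases},
        )


def build_environment(domain_arg: str, registry: dict[str, Environment]) -> Environment:
    """Parse e.g. "delivery,instore,ota" and merge the named environments."""
    names = [name.strip() for name in domain_arg.split(",") if name.strip()]
    merged = registry[names[0]]
    for name in names[1:]:
        merged = merged.merge(registry[name])
    return merged


# Toy usage with two hypothetical single-domain environments
registry = {
    "delivery": Environment(["delivery"], {"create_order": lambda: None}),
    "ota": Environment(["ota"], {"search_hotels": lambda: None}),
}
env = build_environment("delivery,ota", registry)
print(env.domains, sorted(env.tools))  # ['delivery', 'ota'] ['create_order', 'search_hotels']
```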
Citation
If you find our work helpful or relevant to your research, please kindly cite our paper:
@article{he2025vitabench,
title={VitaBench: Benchmarking LLM Agents with Versatile Interactive Tasks in Real-world Applications},
author={He, Wei and Sun, Yueqing and Hao, Hongyan and Hao, Xueyuan and Xia, Zhikang and Gu, Qi and Han, Chengcheng and Zhao, Dengchang and Su, Hui and Zhang, Kefeng and Gao, Man and Su, Xi and Cai, Xiaodong and Cai, Xunliang and Yang, Yu and Zhao, Yunke},
journal={arXiv preprint arXiv:2509.26490},
year={2025}
}
License
This project is licensed under the MIT License - see the LICENSE file for details.