---
configs:
- config_name: video_perspective
  data_files: video_perspective.json
- config_name: question_perspective
  data_files: question_perspective.json
- config_name: train
  data_files: train.json
license: cc-by-nc-sa-4.0
---

# FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding

[![arXiv](https://img.shields.io/badge/cs.CV-2503.xxxxx-b31b1b?logo=arxiv&logoColor=red)](https://arxiv.org) [![GitHub](https://img.shields.io/badge/GitHub-FAVOR--Bench-blue?logo=github)](https://github.com/FAVOR-Bench/FAVOR-Bench.github.io) [![Static Badge](https://img.shields.io/badge/website-FAVOR--Bench-8A2BE2)](https://favor-bench.github.io/)
---

## 🔥 News

* **`2025.03.16`** 🌟 We released FAVOR-Bench, a new benchmark for fine-grained video motion understanding!

## Introduction

Multimodal Large Language Models (MLLMs) have shown remarkable capabilities in video content understanding but still struggle with fine-grained motion comprehension. To comprehensively assess the motion understanding ability of existing MLLMs, we introduce FAVOR-Bench, comprising 1,776 videos with structured manual annotations of various motions. Our benchmark includes both close-ended and open-ended tasks. For close-ended evaluation, we carefully design 8,184 multiple-choice question-answer pairs spanning six distinct sub-tasks. For open-ended evaluation, we develop two caption assessment methods: a novel, cost-efficient LLM-free method and a GPT-assisted one, where the former improves benchmarking interpretability and reproducibility. Comprehensive experiments with 21 state-of-the-art MLLMs reveal significant limitations in their ability to comprehend and describe detailed temporal dynamics of video motions. To alleviate this limitation, we further build FAVOR-Train, a dataset consisting of 17,279 videos with fine-grained motion annotations. Finetuning Qwen2.5-VL on FAVOR-Train yields consistent improvements on the motion-related tasks of TVBench, MotionBench, and our FAVOR-Bench. These assessment results demonstrate that the proposed FAVOR-Bench and FAVOR-Train provide valuable tools to the community for developing more powerful video understanding models.

### Evaluation Tasks

## Dataset

### License

Our dataset is released under the CC-BY-NC-SA-4.0 license. FAVOR-Bench may only be used for academic research; commercial use in any form is prohibited. We do not own the copyright of any raw video files. If there is any infringement in FAVOR-Bench, please contact zhangl22@m.fudan.edu.cn or directly raise an issue, and we will remove the content immediately.

### FAVOR-Bench Videos

We provide all self-collected video clips from TV series and animations in this space. For publicly available videos, you can download them from the original sources:

```
1. Charades: https://prior.allenai.org/projects/charades
2. EgoTaskQA: https://sites.google.com/view/egotaskqa
```

### FAVOR-Train Videos

For videos originating from Koala36M, we provide their YouTube links along with start and end times; you can download them with tools like `yt-dlp`. For publicly available videos, you can download them from the original sources:

```
1. Charades-Ego: https://prior.allenai.org/projects/charades-ego
2. EgoTaskQA: https://sites.google.com/view/egotaskqa
3. EgoExoLearn: https://huggingface.co/datasets/hyf015/EgoExoLearn
4. EgoExo4D: https://ego-exo4d-data.org/
```

### JSON Files

For FAVOR-Bench, we provide both question-perspective and video-perspective dicts. In the video-perspective file, each entry represents one video and provides the caption, camera motion, subject attributes, motion list, chronological motion list, and all questions (question, options, correct answer, task type). In the question-perspective file, each entry represents a single question, including the question, options, correct answer, task type, and the corresponding video name.

## 📈 Results

- **Model Comparison:**
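For the Koala36M-sourced clips in FAVOR-Train, the start/end times can be passed to `yt-dlp`'s `--download-sections` option so only the annotated segment is fetched. Below is a minimal sketch; the field names (`url`, `start`, `end`) and output layout are illustrative assumptions, not the released schema — adjust them to match the actual `train.json` entries.

```python
# Sketch: download only the annotated [start, end] section of a YouTube
# video with yt-dlp. NOTE: the "url"/"start"/"end" field names below are
# hypothetical -- check them against the released train.json.
import subprocess

def build_ytdlp_command(url, start, end, out_dir="videos"):
    """Build a yt-dlp invocation that downloads only the start-end section.

    `start` and `end` are "HH:MM:SS" strings; yt-dlp's --download-sections
    "*start-end" syntax keeps only that part of the video.
    """
    return [
        "yt-dlp",
        "--download-sections", f"*{start}-{end}",
        "-o", f"{out_dir}/%(id)s.%(ext)s",
        url,
    ]

cmd = build_ytdlp_command(
    "https://www.youtube.com/watch?v=XXXX",  # placeholder link
    "00:00:05", "00:00:20",
)
# subprocess.run(cmd, check=True)  # uncomment to actually download
```

Building the argument list separately makes it easy to batch over all entries in the training JSON before launching downloads.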
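The question-perspective file described above (one entry per question, each carrying its task type and video name) lends itself to simple per-task bucketing when scoring a model. The sketch below assumes snake_case keys such as `task_type` and `video_name`; verify them against the released `question_perspective.json` before use.

```python
# Sketch: load the question-perspective JSON and group questions by task
# type. Field names ("question", "task_type", "video_name", ...) are
# assumptions -- confirm them against question_perspective.json.
import json
from collections import defaultdict

def questions_by_task(entries):
    """Group question entries by their task type."""
    buckets = defaultdict(list)
    for entry in entries:
        buckets[entry["task_type"]].append(entry)
    return buckets

# In practice:
# with open("question_perspective.json") as f:
#     entries = json.load(f)
sample = [  # tiny stand-in for the real file
    {"question": "q1", "task_type": "Action Sequence", "video_name": "v1"},
    {"question": "q2", "task_type": "Camera Motion", "video_name": "v1"},
]
buckets = questions_by_task(sample)
print({task: len(qs) for task, qs in buckets.items()})
```

The same grouping works for computing per-sub-task accuracy once model predictions are attached to each entry.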

- **Benchmark Comparison:**

- **Benchmark Statistics:**

Data statistics of FAVOR-Bench. Left: Task type distribution across close-ended and open-ended evaluation in FAVOR-Bench. Middle: Distribution of motion numbers (motion sequence length) per video. Right: The word cloud statistics of motion vocabularies in FAVOR-Bench.

More data statistics of FAVOR-Bench. Left: Index distribution of correct answers for the close-ended tasks. For example, "(1)" indicates that the correct option is ranked first. Middle: Video duration distribution of FAVOR-Bench. Right: Question number distribution for videos of FAVOR-Bench.

## Citation

If you find our work helpful for your research, please consider citing it.

```bibtex
@misc{tu2025favor,
      title={FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding},
      author={Chongjun Tu and Lin Zhang and Pengtao Chen and Peng Ye and Xianfang Zeng and Wei Cheng and Gang Yu and Tao Chen},
      year={2025},
      eprint={coming soon},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```