Typo in leaderboard

#1
by iyuge2 - opened

Hello, thank you for your excellent work. I noticed a small typo in Table 2: the value of MMStar for GLM-4.5V should be 75.6 according to the original paper, rather than 72.9, which corresponds to GLM-4.1V-9B-Thinking. Thank you again for your valuable contribution.

Hi, thank you for your great work. We are the research team at Z.ai. We noticed that in Table 2 you report results attributed to LVBench [1], while the numbers are actually the performance on LongVideoBench [2], which is also reported in the 'Video Understanding' section of your blog (https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B#video-understanding). These are two different benchmarks, and we used LVBench [1] in our report.

Could you double-check whether the performance in Table 2 is from LVBench [1] or LongVideoBench [2] and resolve the inconsistency between Table 2 and the 'Video Understanding' section?

[1] LVBench
@misc{wang2025lvbenchextremelongvideo,
title={LVBench: An Extreme Long Video Understanding Benchmark},
author={Weihan Wang and Zehai He and Wenyi Hong and Yean Cheng and Xiaohan Zhang and Ji Qi and Xiaotao Gu and Shiyu Huang and Bin Xu and Yuxiao Dong and Ming Ding and Jie Tang},
year={2025},
eprint={2406.08035},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2406.08035},
}

[2] LongVideoBench
@misc{wu2024longvideobenchbenchmarklongcontextinterleaved,
title={LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding},
author={Haoning Wu and Dongxu Li and Bei Chen and Junnan Li},
year={2024},
eprint={2407.15754},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.15754},
}

OpenGVLab org

Thank you for your suggestions. We will correct these errors in the revision and release a unified update next week.
