VisionArena: 230K Real World User-VLM Conversations with Preference Labels
Abstract
With the growing adoption and capabilities of vision-language models (VLMs) comes the need for benchmarks that capture authentic user-VLM interactions. In response, we create VisionArena, a dataset of 230K real-world conversations between users and VLMs. Collected from Chatbot Arena, an open-source platform where users interact with VLMs and submit preference votes, VisionArena spans 73K unique users, 45 VLMs, and 138 languages. Our dataset contains three subsets: VisionArena-Chat, 200K single- and multi-turn conversations between a user and a VLM; VisionArena-Battle, 30K conversations comparing two anonymous VLMs with user preference votes; and VisionArena-Bench, an automatic benchmark of 500 diverse user prompts that efficiently approximates the live Chatbot Arena model rankings. Additionally, we highlight the types of questions users ask, the influence of response style on preference, and areas where models often fail. We find that open-ended tasks like captioning and humor are highly style-dependent, and that current VLMs struggle with spatial reasoning and planning tasks. Lastly, we show that finetuning the same base model on VisionArena-Chat outperforms finetuning on LLaVA-Instruct-158K, with a 17-point gain on MMMU and a 46-point gain on the WildVision benchmark. Dataset at https://huggingface.co/lmarena-ai
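The three subsets can be pulled with the Hugging Face `datasets` library. The sketch below is a minimal loading example; the exact repository IDs under the lmarena-ai organization are assumptions, so check https://huggingface.co/lmarena-ai for the published dataset names.

```python
# Minimal sketch: load the three VisionArena subsets with the `datasets` library.
# The repository IDs below are assumptions; see https://huggingface.co/lmarena-ai.
from datasets import load_dataset

chat = load_dataset("lmarena-ai/VisionArena-Chat", split="train")      # assumed ID
battle = load_dataset("lmarena-ai/VisionArena-Battle", split="train")  # assumed ID
bench = load_dataset("lmarena-ai/VisionArena-Bench", split="train")    # assumed ID

print(len(chat), len(battle), len(bench))  # roughly 200K / 30K / 500 per the paper
print(chat[0].keys())                      # inspect the conversation schema
```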
Community
Collected from Chatbot Arena, VisionArena consists of 230K user-VLM conversations spanning 73K users, 45 VLMs, and 138 languages.
A breakdown of VisionArena's 3 datasets 🧵
1️⃣ VisionArena-Chat: 200K real convos between users & VLMs
2️⃣ VisionArena-Battle: 30K blind A/B tests where users pick their preferred model's response
3️⃣ VisionArena-Bench: 500 carefully selected prompts that predict arena rankings (see the ranking sketch after this post)
Check out these datasets and more on Hugging Face!
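For the battle subset, a leaderboard-style ranking can be recovered from the pairwise preference votes. Below is a minimal sketch that fits a simple Bradley-Terry model, the kind of pairwise-comparison model commonly used for Chatbot Arena-style leaderboards. The record field names (`model_a`, `model_b`, `winner`) are assumptions rather than the dataset's documented schema, and tie votes are ignored.

```python
import numpy as np

# Hypothetical battle records; the real VisionArena-Battle field names may differ.
battles = [
    {"model_a": "model-x", "model_b": "model-y", "winner": "model_a"},
    {"model_a": "model-y", "model_b": "model-z", "winner": "model_b"},
    {"model_a": "model-x", "model_b": "model-z", "winner": "model_a"},
]

models = sorted({b["model_a"] for b in battles} | {b["model_b"] for b in battles})
idx = {m: i for i, m in enumerate(models)}
scores = np.zeros(len(models))  # one log-strength per model

# Gradient ascent on the Bradley-Terry log-likelihood:
# P(model_a wins) = sigmoid(score_a - score_b).
lr = 0.1
for _ in range(2000):
    grad = np.zeros(len(models))
    for b in battles:
        a, c = idx[b["model_a"]], idx[b["model_b"]]
        p_a = 1.0 / (1.0 + np.exp(scores[c] - scores[a]))
        y = 1.0 if b["winner"] == "model_a" else 0.0
        grad[a] += y - p_a
        grad[c] += p_a - y
    scores += lr * grad
    scores -= scores.mean()  # strengths are identifiable only up to a constant

for m in sorted(models, key=lambda m: -scores[idx[m]]):
    print(f"{m}: {scores[idx[m]]:+.2f}")
```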
Great work! I noticed the dataset isn't available via the current link. Could you share when it might be released? Looking forward to your updates!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale (2024)
- LLaVA-CoT: Let Vision Language Models Reason Step-by-Step (2024)
- VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models (2024)
- Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset (2024)
- MC-Bench: A Benchmark for Multi-Context Visual Grounding in the Era of MLLMs (2024)
- Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling (2024)
- Synthetic Vision: Training Vision-Language Models to Understand Physics (2024)