---
license: mit
task_categories:
  - visual-question-answering
  - question-answering
language:
  - en
pretty_name: V2PE-Data
size_categories:
  - 100B<n<1T
---

# V2PE-Data

[๐Ÿ“‚ GitHub] [๐Ÿ†• Blog] [๐Ÿ“œ Paper] [๐Ÿค— HF Models]


## Summary

We introduce two augmented long-context multimodal datasets: Long Visual Question Answering (Long-VQA) and Long Multimodal Retrieval (Long-MR). These datasets are designed to strengthen VLMs' long-context training and to provide a systematic evaluation framework, addressing long-context understanding challenges that lie beyond the scope of existing training data.


- **Long Visual Question Answering (Long-VQA):** The Long-VQA dataset evaluates the capability of VLMs to understand and reason over long multimodal sequences in general visual question-answering tasks. We extended 17 widely adopted datasets (e.g., DocVQA, GQA, SQA), expanding their content from short sequences to sequences of up to 32K tokens. The tasks require answering questions that involve commonsense reasoning, factual knowledge, and interpretation of visual information from charts, documents, and real-world text. Long-VQA contains 533K samples: 392K for training (up to 32K tokens) and 141K for validation (up to 64K tokens), the latter used to evaluate generalization to longer contexts.


- **Long Multimodal Retrieval (Long-MR):** We built Long-MR by inserting a target image or textual segment into sequences of interleaved images and text. Long-MR evaluates VLMs' ability to retrieve specific targets from ultra-long multimodal sequences, requiring models to locate the inserted "needle" and answer the associated question. Following the data construction process of MM-NIAH, we generated two subsets: Long-MR-32K (488K samples, sequences up to 32K tokens) and Long-MR-256K (50K samples, sequences up to 256K tokens); a simplified construction sketch is shown below this list. To probe the limits of VLMs' long-context capabilities, we further extended the official MM-NIAH evaluation benchmark with test samples whose sequence lengths range from 64K to 1M tokens, resulting in the MM-NIAH-1M benchmark. This pushes evaluation beyond the original MM-NIAH, which is limited to sequences of up to 64K tokens.

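The exact construction pipeline is described in the paper and the GitHub repo; the snippet below is only an illustrative sketch of the needle-insertion idea behind Long-MR-style samples. All names and fields (`Sample`, `insert_needle`, the segment format) are hypothetical and are not the actual V2PE-Data schema.

```python
# Illustrative sketch only: a target "needle" (text or image) is inserted at a random
# position inside a long interleaved image/text sequence, and a question about the
# needle is attached. Field names are hypothetical, not the real V2PE-Data schema.
import random
from dataclasses import dataclass

@dataclass
class Sample:
    segments: list          # interleaved text chunks and image placeholders
    question: str = ""
    answer: str = ""
    needle_position: int = -1

def insert_needle(haystack_segments, needle, question, answer, rng=random):
    """Insert `needle` (a text string or an image dict) at a random position."""
    pos = rng.randint(0, len(haystack_segments))   # random insertion point
    segments = haystack_segments[:pos] + [needle] + haystack_segments[pos:]
    return Sample(segments=segments, question=question, answer=answer, needle_position=pos)

if __name__ == "__main__":
    haystack = [
        "filler paragraph 1 ...",
        {"image": "doc_page_01.png"},   # image placeholder in an interleaved sequence
        "filler paragraph 2 ...",
    ]
    sample = insert_needle(
        haystack,
        needle="The secret password is 7421.",
        question="What is the secret password mentioned in the document?",
        answer="7421",
    )
    print(sample.needle_position, len(sample.segments))
```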

Please refer to our paper for more details.

## Evaluation Results of the Released Model

### General MLLM Benchmarks

| Model | #Param | ChartQA | DocVQA | AI2D | InfoVQA | SQA | POPE | MMMU<sub>val</sub> | MMBench<sub>EN</sub> | SEED<sub>I</sub> | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| InternVL2-2B | 2.0B | 71.7 | 86.9 | 74.1 | 58.9 | 94.1 | 85.2 | 36.3 | 73.4 | 70.9 | 72.4 |
| DeepSeek-VL-1.3B | 2.0B | 47.4 | - | 51.5 | - | 68.4 | 85.9 | 33.8 | 66.4 | 66.0 | - |
| Qwen2-VL-2B | 2.0B | 73.5 | 90.1 | 74.7 | 65.5 | - | - | 41.1 | 74.9 | - | - |
| Aquila-VL-2B | 2.2B | 32.0 | 85.0 | 75.1 | 58.3 | 95.1 | 83.1 | 46.9 | 79.0 | 73.9 | 69.8 |
| MiniCPM-V-2 | 2.8B | 55.6 | 71.9 | 62.9 | - | 80.7 | 86.3 | 38.2 | 64.1 | 67.1 | - |
| Vintern-3B-beta | 3.7B | 68.3 | - | 69.1 | - | 75.0 | 87.4 | 46.7 | 70.6 | 70.0 | - |
| Llama 3.2 11B | 11B | 83.4 | 88.4 | 91.1 | - | - | - | 50.7 | 68.0 | - | - |
| Qwen2-VL-72B | 73B | 88.3 | 96.5 | 88.1 | 84.5 | 91.2 | 87.2 | 64.5 | 86.9 | 77.9 | 85.0 |
| GPT-4o | - | 85.7 | 92.8 | 84.7 | - | 90.1 | 97.2 | 69.1 | 82.1 | 76.7 | - |
| InternVL2-V2PE-32K | 2.0B | 76.4 | 83.9 | 73.2 | 55.9 | 94.9 | 88.8 | 36.6 | 73.5 | 71.2 | 72.5 |

### Long-Context MLLM Benchmarks

| Model | #Param | MM-NIAH/Image | MM-NIAH/Text | MM-NIAH/Avg | MileBench/T | MileBench/S | MileBench/NI | MileBench/Avg | VideoMME | MVBench |
|---|---|---|---|---|---|---|---|---|---|---|
| InternVL2-2B | 2.0B | 23.0 | 18.9 | 21.0 | 58.2 | 54.5 | 37.0 | 49.9 | - | - |
| Phi-3-Vision | 2.7B | - | - | - | 46.9 | 50.0 | - | - | - | - |
| OmChat | 3.9B | - | - | - | 51.4 | 52.0 | - | - | 45.9 | 50.2 |
| LongLLaVA | 9B | - | - | - | 47.3 | 46.8 | - | - | 43.7 | 49.1 |
| LongLLaVA | 13B | - | - | - | 52.7 | 52.1 | - | - | 51.6 | 54.6 |
| VILA | 13B | 14.5 | 40.5 | 27.5 | - | - | - | - | - | - |
| Gemini-1.5 | - | 28.5 | 82.1 | 55.2 | 50.2 | 58.3 | 97.9 | 68.8 | 69.6 | - |
| GPT-4V | - | - | 84.1 | - | 45.6 | 58.9 | 99.4 | 68.0 | 59.9 | 43.5 |
| GPT-4o | - | - | - | - | 56.2 | 63.5 | - | - | 64.7 | - |
| Claude3-Opus | - | - | - | - | 37.4 | 48.1 | 85.3 | 56.9 | 59.7 | - |
| InternVL2-V2PE-32K | 2.0B | 78.1 | 85.7 | 81.8 | 65.5 | 56.4 | 97.2 | 72.5 | 50.7 | 65.6 |

## Usage

Please refer to our GitHub Repo.
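For reference, a minimal sketch of downloading the raw files with `huggingface_hub` is shown below. The repo id `OpenGVLab/V2PE-Data` is an assumption based on this card's name; adjust it to the actual dataset path on the Hub, and see the GitHub repo for the training and evaluation scripts.

```python
# Minimal download sketch. Assumption: the dataset lives at "OpenGVLab/V2PE-Data"
# on the Hugging Face Hub; replace repo_id if this dataset card uses a different path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="OpenGVLab/V2PE-Data",  # hypothetical repo id, see note above
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```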

## Citation

If you find this work helpful in your research, please consider citing:

```bibtex
@misc{ge2024v2peimprovingmultimodallongcontext,
      title={V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding},
      author={Junqi Ge and Ziyi Chen and Jintao Lin and Jinguo Zhu and Xihui Liu and Jifeng Dai and Xizhou Zhu},
      year={2024},
      eprint={2412.09616},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.09616},
}
```