pretty_name: V2PE-Data
size_categories:
- 100B<n<1T
---

# V2PE-Data

[\[GitHub\]](https://github.com/OpenGVLab/V2PE) [\[Blog\]](https://zzdhybthu.github.io/V2PE.github.io/) [\[Paper\]](https://arxiv.org/abs/2412.09616) [\[HF Models\]](https://huggingface.co/OpenGVLab/V2PE)

![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/ewbZmWctNv-uLFlnMCGK9.png)

## Summary
|
19 |
+
|
20 |
+
We introduce two augmented long-context multimodal datasets: **Long Visual Question Answering** and **Long multimodal Retrieval**. These datasets aim to enhance VLMs' long-context training and establish a systematic evaluation framework, thereby addressing the challenges associated with long-context understanding that extend beyond the scope of existing training data.
|
21 |
+
|
22 |
+
|
23 |
+
![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/93ts7Q204GAX-Lu6tLnY8.png)
|
24 |
+
|
25 |
+
|
26 |
+
- **Long Visual Question Answering (Long-VQA):** The Long-VQA dataset aims to evaluate the capabilities of VLMs in understanding and reasoning over long multimodal sequences within general visual question-answering tasks. We extended 17 widely adopted datasets (e.g., DocVQA, GQA, SQA), expanding their content from short sequences to those containing up to 32K tokens. The tasks involve answering questions that require commonsense reasoning, factual knowledge, and interpretation of visual information from charts, documents, and real-world texts. Long-VQA contains 533K samples: 392K for training (up to 32K tokens) and 141K for validation (up to 64K tokens) to evaluate the generalization to longer contexts.
|
27 |
+
|
28 |
+
![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/gkfXER4GLtFGYpjQ0gu7G.png)
|
29 |
+
|
30 |
+
- **Long Multimodal Retrieval (Long-MR):** we developed Long-MR by inserting a target image or textual segment into sequences of interleaved images and texts. Long-MR evaluates VLMs' ability to retrieve specific targets from ultra-long multimodal sequences, requiring models to locate the inserted "needle" and answer associated questions. We generated two subsets of Long-MR: Long-MR-32K (488K samples, sequences up to 32K tokens) and Long-MR-256K (50K samples, sequences up to 256K tokens), following the data construction process of MM-NIAH. To assess the limits of VLMs' long-context capabilities, we further extend the official MM-NIAH evaluation benchmark by generating testing samples with sequence lengths ranging from 64K to 1M tokens, resulting in the MM-NIAH-1M benchmark. This extension pushes the testing capacity beyond the original MM-NIAH, which was limited to sequences of up to 64K tokens.
|
31 |
+
|
32 |
+
![image.png](https://cdn-uploads.huggingface.co/production/uploads/646f23418180f35af53531a6/mEpfOPY0gue_BHDDNCOMH.png)
|
33 |
+
|
34 |
+
Please refer to our [paper](https://arxiv.org/abs/2412.09616) for more details.
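
The released annotation files define the actual schema; purely as an illustration of the needle-insertion idea behind Long-MR (and MM-NIAH), the minimal sketch below uses a hypothetical chunk representation and field names (`context`, `needle_index`, `question`, `answer`) that are not the dataset format:

```python
import random

def build_needle_sample(chunks, needle, question, answer, seed=0):
    """Sketch of MM-NIAH-style construction (hypothetical format):
    hide a target 'needle' chunk at a random position inside a long
    interleaved image/text sequence and attach a question about it."""
    rng = random.Random(seed)
    position = rng.randint(0, len(chunks))  # where the needle gets inserted
    haystack = chunks[:position] + [needle] + chunks[position:]
    return {
        "context": haystack,        # interleaved text strings and image paths
        "needle_index": position,   # ground-truth location of the target
        "question": question,       # asks the model to retrieve/use the needle
        "answer": answer,
    }

# Toy example: a long interleaved sequence with one image and a text needle.
chunks = [{"type": "text", "value": f"filler paragraph {i}"} for i in range(1000)]
chunks[3] = {"type": "image", "value": "images/page_0003.png"}
sample = build_needle_sample(
    chunks,
    needle={"type": "text", "value": "The secret code is 4721."},
    question="What is the secret code mentioned in the document?",
    answer="4721",
)
print(sample["needle_index"], sample["question"])
```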

## Evaluation Results of [Released Model](https://huggingface.co/OpenGVLab/V2PE)

**General MLLM Benchmarks**

| Model | #Param | ChartQA | DocVQA | AI2D | InfoVQA | SQA | POPE | MMMU<sub>val</sub> | MMBench<sub>EN</sub> | SEED<sub>I</sub> | Avg |
|---------------------------|--------|---------|--------|-------|---------|-------|-------|--------------------|---------------------|------------------|-------|
| InternVL2-2B | 2.0B | 71.7 | 86.9 | 74.1 | 58.9 | 94.1 | 85.2 | 36.3 | 73.4 | 70.9 | 72.4 |
| DeepSeek-VL-1.3B | 2.0B | 47.4 | - | 51.5 | - | 68.4 | 85.9 | 33.8 | 66.4 | 66.0 | - |
| Qwen2-VL-2B | 2.0B | 73.5 | 90.1 | 74.7 | 65.5 | - | - | 41.1 | 74.9 | - | - |
| Aquila-VL-2B | 2.2B | 32.0 | 85.0 | 75.1 | 58.3 | 95.1 | 83.1 | 46.9 | 79.0 | 73.9 | 69.8 |
| MiniCPM-V-2 | 2.8B | 55.6 | 71.9 | 62.9 | - | 80.7 | 86.3 | 38.2 | 64.1 | 67.1 | - |
| Vintern-3B-beta | 3.7B | 68.3 | - | 69.1 | - | 75.0 | 87.4 | 46.7 | 70.6 | 70.0 | - |
| Llama 3.2 11B | 11B | 83.4 | 88.4 | 91.1 | - | - | - | 50.7 | 68.0 | - | - |
| Qwen2-VL-72B | 73B | 88.3 | 96.5 | 88.1 | 84.5 | 91.2 | 87.2 | 64.5 | 86.9 | 77.9 | 85.0 |
| GPT-4o | - | 85.7 | 92.8 | 84.7 | - | 90.1 | 97.2 | 69.1 | 82.1 | 76.7 | - |
| **InternVL2-V2PE-32K** | 2.0B | **76.4** | **83.9** | **73.2** | **55.9** | **94.9** | **88.8** | **36.6** | **73.5** | **71.2** | **72.5** |

**Long-Context MLLM Benchmarks**

| Model | #Param | MM-NIAH/Image | MM-NIAH/Text | MM-NIAH/Avg | Milebench/T | Milebench/S | Milebench/NI | Milebench/Avg | VideoMME | MVBench |
|--------------------------|--------|---------------|--------------|-------------|--------------|--------------|---------------|--------------|------------|------------|
| InternVL2-2B | 2.0B | 23.0 | 18.9 | 21.0 | 58.2 | 54.5 | 37.0 | 49.9 | - | - |
| Phi-3-Vision | 2.7B | - | - | - | 46.9 | 50.0 | - | - | - | - |
| OmChat | 3.9B | - | - | - | 51.4 | 52.0 | - | - | 45.9 | 50.2 |
| LongLLaVA | 9B | - | - | - | 47.3 | 46.8 | - | - | 43.7 | 49.1 |
| LongLLaVA | 13B | - | - | - | 52.7 | 52.1 | - | - | 51.6 | 54.6 |
| VILA | 13B | 14.5 | 40.5 | 27.5 | - | - | - | - | - | - |
| Gemini-1.5 | - | 28.5 | 82.1 | 55.2 | 50.2 | 58.3 | 97.9 | **68.8** | **69.6** | - |
| GPT-4V | - | - | 84.1 | - | 45.6 | 58.9 | **99.4** | 68.0 | 59.9 | 43.5 |
| GPT-4o | - | - | - | - | 56.2 | **63.5** | - | - | 64.7 | - |
| Claude3-Opus | - | - | - | - | 37.4 | 48.1 | 85.3 | 56.9 | 59.7 | - |
| **InternVL2-V2PE-32K** | 2.0B | **78.1** | **85.7** | **81.8** | **65.5** | 56.4 | 97.2 | 72.5 | 50.7 | **65.6** |

## Usage

Please refer to our [GitHub Repo](https://github.com/OpenGVLab/V2PE?tab=readme-ov-file#prepare-training-datasets) for instructions on preparing the training datasets.
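
The GitHub instructions above are the authoritative setup path. Purely as a hedged sketch, assuming this card corresponds to the dataset repo `OpenGVLab/V2PE-Data` and that you simply want a local copy of the raw files before following those steps, the repository can be fetched with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download every file in the dataset repository to a local directory.
# The repo id below is an assumption based on this card's pretty_name;
# adjust it if the actual dataset id differs.
local_dir = snapshot_download(
    repo_id="OpenGVLab/V2PE-Data",
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```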

## Citation

If you find this work helpful in your research, please consider citing:

```bibtex
@misc{ge2024v2peimprovingmultimodallongcontext,
      title={V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding},
      author={Junqi Ge and Ziyi Chen and Jintao Lin and Jinguo Zhu and Xihui Liu and Jifeng Dai and Xizhou Zhu},
      year={2024},
      eprint={2412.09616},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.09616},
}
```