---
license: mit
---
# Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models

<br>

<p align="center">
  <img src="https://v1.ax1x.com/2024/08/13/7GXu34.png" width="300"/>
</p>

> [**Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models**](https://arxiv.org/abs/2408.02034)<br>
> Mingxin Huang, Yuliang Liu, Dingkang Liang, Lianwen Jin, Xiang Bai <br>

[![arXiv](https://img.shields.io/badge/Arxiv-2408.02034-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2408.02034)
[![Demo](https://img.shields.io/badge/Demo-blue)](http://vlrlab-monkey.xyz:7685)
[![Model Weight](https://img.shields.io/badge/Model_Weight-gray)](https://www.wisemodel.cn/models/HUST-VLRLab/Mini-Monkey)

-----

**Mini-Monkey** is a lightweight MLLM that incorporates a plug-and-play multi-scale adaptive cropping strategy (MSAC). Mini-Monkey adaptively generates multi-scale representations, allowing it to select non-segmented objects from various scales. To mitigate the computational overhead introduced by MSAC, we propose a Scale Compression Mechanism (SCM), which effectively compresses image tokens. Mini-Monkey achieves state-of-the-art performance among 2B-parameter MLLMs: it not only delivers leading performance on a variety of general multimodal understanding tasks but also shows consistent improvements in document understanding. On OCRBench, Mini-Monkey scores 802, outperforming the 8B-parameter state-of-the-art model InternVL2-8B. Moreover, our model and training strategy are highly efficient: Mini-Monkey can be trained with only eight RTX 3090 GPUs.
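
To make the multi-scale cropping idea concrete, here is a minimal, illustrative Python sketch. It is not the authors' MSAC implementation (see the repository for the real MSAC and SCM); the helper names, the two grid layouts, and the 448-pixel tile size are assumptions chosen only for illustration.

```python
# Illustrative sketch of multi-scale cropping; NOT the official MSAC code.
# Assumption: two hypothetical grid layouts plus one global thumbnail.
from PIL import Image


def tile_image(img, rows, cols, tile_size=448):
    """Split an image into rows x cols crops, each resized to tile_size x tile_size."""
    w, h = img.size
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (c * w // cols, r * h // rows,
                   (c + 1) * w // cols, (r + 1) * h // rows)
            tiles.append(img.crop(box).resize((tile_size, tile_size)))
    return tiles


def multi_scale_crops(img, grids=((1, 2), (2, 3)), tile_size=448):
    """Crop the same image under several grid layouts ("scales") and add a global view."""
    crops = []
    for rows, cols in grids:
        crops.extend(tile_image(img, rows, cols, tile_size))
    crops.append(img.resize((tile_size, tile_size)))  # global thumbnail
    return crops


# Example usage: crops = multi_scale_crops(Image.open("example.jpg"))
```

Because the crops come from different grid layouts, an object that is split across tile boundaries at one scale is likely to appear intact at another, which is what lets the model select non-segmented objects; SCM then compresses the extra image tokens this multi-scale input produces.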

# TODO

- [x] Open-source the code, weights, and data
- [x] Support training on RTX 3090 GPUs (24 GB of video memory)
- [ ] Mini-Monkey with different LLMs

# Model Zoo

Mini-Monkey was trained using 8 RTX 3090 GPUs on the open-source data described in the Train section below.

| Model | #Param | MME | RWQA | AI2D | CCB | SEED | HallB | POPE | MathVista | DocVQA | ChartQA | InfoVQA | TextVQA | OCRBench |
|-------|--------|-----|------|------|-----|------|-------|------|-----------|--------|---------|---------|---------|----------|
| Mini-Gemini | 35B | 2141.0 | - | - | - | - | - | - | 43.3 | - | - | - | - | - |
| LLaVA-NeXT | 35B | 2028.0 | - | 74.9 | 49.2 | 75.9 | 34.8 | 89.6 | 46.5 | - | - | - | - | - |
| InternVL 1.2 | 40B | 2175.4 | 67.5 | 79.0 | 59.2 | 75.6 | 47.6 | 88.0 | 47.7 | - | - | - | - | - |
| InternVL 1.5 | 26B | 2187.8 | 66.0 | 80.7 | 69.8 | 76.0 | 49.3 | 88.3 | 53.5 | 90.9 | 83.8 | 72.5 | 80.6 | 724 |
| DeepSeek-VL | 1.7B | 1531.6 | 49.7 | 51.5 | 37.6 | 43.7 | 27.6 | 85.9 | 29.4 | - | - | - | - | - |
| Mini-Gemini | 2.2B | 1653.0 | - | - | - | - | - | - | 29.4 | - | - | - | - | - |
| Bunny-StableLM-2 | 2B | 1602.9 | - | - | - | 58.8 | - | 85.9 | - | - | - | - | - | - |
| MiniCPM-V-2 | 2.8B | 1808.6 | 55.8 | 62.9 | 48.0 | - | 36.1 | 86.3 | 38.7 | 71.9 | 55.6 | - | 74.1 | 605 |
| InternVL 2 | 2B | 1876.8 | 57.3 | 74.1 | 74.7 | 70.9 | 37.9 | 85.2 | 46.3 | 86.9 | 76.2 | 58.9 | 73.4 | 784 |
| Mini-Monkey (ours) | 2B | 1881.9 | 57.5 | 74.7 | 75.5 | 71.3 | 38.7 | 86.7 | 47.3 | 87.4 | 76.5 | 60.1 | 75.7 | 802 |

## Environment

```bash
conda create -n minimonkey python=3.10
conda activate minimonkey
git clone https://github.com/Yuliang-Liu/Monkey.git
cd ./Monkey/project/mini_monkey
pip install -r requirements.txt
```

Install `flash-attn==2.3.6`:

```bash
pip install flash-attn==2.3.6 --no-build-isolation
```

Alternatively, you can compile it from source:

```bash
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
git checkout v2.3.6
python setup.py install
```
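
Optionally, you can sanity-check the installation with a quick import (this check is ours, not part of the original setup instructions):

```python
# Optional sanity check: confirms flash-attn imports and reports its version.
import flash_attn

print(flash_attn.__version__)  # expect 2.3.6
```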

## Evaluate

We use the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repository for model evaluation.

## Inference

We provide an example of inference code [here](https://github.com/Yuliang-Liu/Monkey/blob/main/project/mini_monkey/demo.py).
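
For orientation, the sketch below shows only the model-loading step; the checkpoint path, the generation settings, and the chat-style call in the trailing comment are assumptions, and the multi-scale preprocessing that produces `pixel_values` lives in the linked `demo.py`, which remains the authoritative example.

```python
# Loading sketch only; see demo.py for the full pipeline (image tiling + chat).
# Assumptions: the checkpoint path below and an InternVL2-style chat interface.
import torch
from transformers import AutoModel, AutoTokenizer

path = "path/to/Mini-Monkey-weights"  # placeholder: replace with the released checkpoint
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# demo.py then builds `pixel_values` from the multi-scale crops of the input image
# and calls the model's chat-style interface, roughly (interface assumed, see demo.py):
# response = model.chat(tokenizer, pixel_values, "Describe this image.",
#                       dict(max_new_tokens=512, do_sample=False))
```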

## Train

### Prepare Training Datasets

Inspired by InternVL 1.2, we adopted the [LLaVA-ZH](https://huggingface.co/datasets/openbmb/llava_zh), [DVQA](https://github.com/kushalkafle/DVQA_dataset), [ChartQA](https://github.com/vis-nlp/ChartQA), [AI2D](https://allenai.org/data/diagrams), [DocVQA](https://www.docvqa.org/datasets), [GeoQA+](https://github.com/SCNU203/GeoQA-Plus), and [SynthDoG-EN](https://huggingface.co/datasets/naver-clova-ix/synthdog-en) datasets. Most of the data remains consistent with InternVL 1.2.

First, download the [annotation files](https://huggingface.co/OpenGVLab/InternVL/resolve/main/playground.zip) and place them in the `playground/opensource/` folder.

Second, download all the images we used:

- AI2D: [ai2d_images](https://drive.google.com/file/d/1dqqa3MnrxMXaU_K9JA6C83je32ibwdOY/view?usp=sharing) (provided by InternLM-XComposer)
- ChartQA: [ChartQA Dataset](https://huggingface.co/datasets/ahmed-masry/ChartQA/resolve/main/ChartQA%20Dataset.zip)
- COCO: [train2017](http://images.cocodataset.org/zips/train2017.zip)
- DocVQA: [train](https://datasets.cvc.uab.es/rrc/DocVQA/train.tar.gz), [val](https://datasets.cvc.uab.es/rrc/DocVQA/val.tar.gz), [test](https://datasets.cvc.uab.es/rrc/DocVQA/test.tar.gz)
- DVQA: [images](https://drive.google.com/file/d/1iKH2lTi1-QxtNUVRxTUWFvUvRHq6HAsZ/view)
- LLaVA-Pretrain: [images](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain/resolve/main/images.zip)
- SynthDoG-EN: We only use 00000~00004 parquet files for now, with a total of 30K images. We provide the converted [images](https://huggingface.co/OpenGVLab/InternVL/resolve/main/synthdog-en-images.zip).
- GeoQA+: [GeoQA+](https://drive.google.com/file/d/1KL4_wIzr3p8XSKMkkLgYcYwCbb0TzZ9O/view), [images](https://huggingface.co/OpenGVLab/InternVL/resolve/main/geoqa%2B_images.zip)

Then, organize the data as follows in `playground/data`:

```none
playground/
├── opensource
│   ├── ai2d_train_12k.jsonl
│   ├── chartqa_train_18k.jsonl
│   ├── docvqa_train_10k.jsonl
│   ├── dvqa_train_200k.jsonl
│   ├── geoqa+.jsonl
│   ├── llava_instruct_150k_zh.jsonl
│   └── synthdog_en.jsonl
├── data
│   ├── ai2d
│   │   ├── abc_images
│   │   └── images
│   ├── chartqa
│   │   ├── test
│   │   ├── train
│   │   └── val
│   ├── coco
│   │   └── train2017
│   ├── docvqa
│   │   ├── test
│   │   ├── train
│   │   └── val
│   ├── dvqa
│   │   └── images
│   ├── llava
│   │   └── llava_pretrain
│   │       └── images
│   ├── synthdog-en
│   │   └── images
│   └── geoqa+
│       └── images
```
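
As a convenience, the following optional script (ours, not part of the original instructions) checks that the expected image folders exist before you launch training; adjust the list if you use a different subset of the data.

```python
# Optional helper: verify the dataset layout sketched above.
from pathlib import Path

ROOT = Path("playground")
EXPECTED = [
    "opensource",
    "data/ai2d/images",
    "data/chartqa/train",
    "data/coco/train2017",
    "data/docvqa/train",
    "data/dvqa/images",
    "data/llava/llava_pretrain/images",
    "data/synthdog-en/images",
    "data/geoqa+/images",
]

missing = [p for p in EXPECTED if not (ROOT / p).is_dir()]
if missing:
    print("Missing directories:")
    for p in missing:
        print("  -", ROOT / p)
else:
    print("Dataset layout looks complete.")
```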

Execute the training script:

```bash
sh shell/minimonkey/minimonkey_finetune_full.sh
```

## Citing Mini-Monkey

If you wish to refer to the baseline results published here, please use the following BibTeX entry:

```BibTeX
@article{huang2024mini,
  title={Mini-Monkey: Multi-Scale Adaptive Cropping for Multimodal Large Language Models},
  author={Huang, Mingxin and Liu, Yuliang and Liang, Dingkang and Jin, Lianwen and Bai, Xiang},
  journal={arXiv preprint arXiv:2408.02034},
  year={2024}
}
```

## Copyright

We welcome suggestions to help us improve Mini-Monkey. For any query, please contact Dr. Yuliang Liu: ylliu@hust.edu.cn. If you find something interesting, please feel free to share it with us via email or by opening an issue.