---
task_categories:
- visual-question-answering
language:
- en
tags:
- Vision
- food
- recipe
configs:
- config_name: Recipe1M
  data_files:
  - split: test
    path: food_eval_multitask_v2/data-*.arrow
- config_name: Nutrition5K
  data_files:
  - split: test
    path: nutrition50k/data-*.arrow
- config_name: Food101
  data_files:
  - split: test
    path: food101/data-*.arrow
- config_name: FoodSeg103
  data_files:
  - split: test
    path: foodseg103/data-*.arrow
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **food visual instruction tasks for evaluating MLLMs** from our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation. 
**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.** 
**(2) Training Pipeline**: While two-stage training--initially on image-caption pairs, followed by visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training. 
**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
</p>

## Resources
**🤗 We share our data and models with example usages; feel free to open any issues or discussions! 🤗**

| Model                                                                       | Repo ID in HF 🤗                           | Domain       | Base Model              | Training Data                                                                                  | Evaluation Benchmark |
|:----------------------------------------------------------------------------|:--------------------------------------------|:--------------|:-------------------------|:------------------------------------------------------------------------------------------------|-----------------------|
| [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer     | -  | open-llava-next-llama3-8b    | VisionFLAN and ALLaVA | -                   |
| [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/biomed-Qwen2-VL-2B-Instruct) | AdaptLLM/biomed-Qwen2-VL-2B-Instruct     | Biomedicine  | Qwen2-VL-2B-Instruct    | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark)                   |
| [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct) | AdaptLLM/food-Qwen2-VL-2B-Instruct     | Food  | Qwen2-VL-2B-Instruct    | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark)                   |
| [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B) | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B     | Biomedicine  | open-llava-next-llama3-8b    | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark)                   |
| [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B) |AdaptLLM/food-LLaVA-NeXT-Llama3-8B     | Food  | open-llava-next-llama3-8b    | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) |  [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark)                   |
| [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct     | Biomedicine  | Llama-3.2-11B-Vision-Instruct    | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark)                   |
| [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct     | Food | Llama-3.2-11B-Vision-Instruct    | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) |  [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark)                   |

**Code**: [https://github.com/bigai-ai/QA-Synthesizer](https://github.com/bigai-ai/QA-Synthesizer)


## 1. Download Data  
You can load datasets using the `datasets` library:  
```python
from datasets import load_dataset

# Choose the task name from the list of available tasks
task_name = 'FoodSeg103'  # Options: 'Food101', 'FoodSeg103', 'Nutrition5K', 'Recipe1M'

# Load the dataset for the chosen task
data = load_dataset('AdaptLLM/food-VQA-benchmark', task_name, split='test')

# Print the first example
print(data[0])
```
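
You can also list the available subsets programmatically (a minimal sketch; `get_dataset_config_names` is part of the `datasets` library):

```python
from datasets import get_dataset_config_names

# List the benchmark subsets (configs) defined in this dataset repo
configs = get_dataset_config_names('AdaptLLM/food-VQA-benchmark')
print(configs)  # expected to include 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103'
```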

The mappings between category names and indices for the `Food101`, `FoodSeg103`, and `Nutrition5K` datasets are provided in the following files: 
<details>
<summary> Click to expand </summary>

- Food101: `food101_name_to_label_map.json`  
- FoodSeg103: `foodSeg103_id2label.json`  
- Nutrition5K: `nutrition5k_ingredients.py`  

#### Example Usages:

**Food101**
```python
import json

# Load the mapping file
map_path = 'food101_name_to_label_map.json'
with open(map_path) as f:
    name_to_label_map = json.load(f)

# Replace underscores with spaces so class names match natural-language model outputs
name_to_label_map = {key.replace('_', ' '): value for key, value in name_to_label_map.items()}

# Reverse mapping: label to name
label_to_name_map = {value: key for key, value in name_to_label_map.items()}
```  
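
For example, a predicted class name can then be converted to its label index and back (a minimal sketch; `'apple pie'` is just an illustrative class name):

```python
# Illustrative lookup using the maps built above
label = name_to_label_map.get('apple pie')
if label is not None:
    print(label, '->', label_to_name_map[label])
```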

**FoodSeg103**
```python
import json

# Load the mapping file
map_path = 'foodSeg103_id2label.json'
with open(map_path) as f:
    id2name_map = json.load(f)

# Remove background and irrelevant labels
id2name_map.pop("0")  # Background
id2name_map.pop("103")  # Other ingredients

# Convert keys to integers
id2name_map = {int(key): value for key, value in id2name_map.items()}

# Create reverse mapping: name to ID
name2id_map = {value: key for key, value in id2name_map.items()}
```  
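
Similarly, a list of predicted ingredient names can be mapped back to IDs, skipping anything outside the 102 valid classes (a minimal sketch; the predicted names are only illustrative):

```python
# Illustrative predictions; keep only names that exist in the mapping
predicted_names = ['rice', 'not-a-real-ingredient']
predicted_ids = [name2id_map[name] for name in predicted_names if name in name2id_map]
print(predicted_ids)
```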

**Nutrition5K** 
```python
from nutrition5k_ingredients import all_ingredients

# Create mappings
id2name_map = dict(enumerate(all_ingredients))
name2id_map = {value: key for key, value in id2name_map.items()}
```
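
A quick sanity check on the mappings built above (a minimal sketch using only the names defined in this snippet):

```python
# Confirm the forward and reverse ingredient mappings are consistent
print(len(id2name_map), 'ingredient classes')
first_name = id2name_map[0]
assert name2id_map[first_name] == 0
print(0, '<->', first_name)
```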
</details>  


## 2. Evaluate Any MLLM Compatible with vLLM on the Food Benchmarks

We provide a guide to directly evaluate MLLMs such as LLaVA-v1.6 ([open-source version](https://huggingface.co/Lin-Chen/open-llava-next-llama3-8b)), Qwen2-VL-Instruct, and Llama-3.2-Vision-Instruct.  
To evaluate other MLLMs, refer to [this vLLM example](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py) and modify the `BaseTask` class in [vllm_inference/utils/task.py](https://github.com/bigai-ai/QA-Synthesizer/blob/main/vllm_inference/utils/task.py) accordingly.  
Feel free to reach out to us for assistance!

**The dataset loading script is embedded in the inference code, so you can directly run the following commands to evaluate MLLMs.**  

### 1) Setup

Install vLLM using `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source).  

As recommended in the official vLLM documentation, install vLLM in a **fresh** conda environment:

```bash
conda create -n vllm python=3.10 -y
conda activate vllm
pip install vllm  # Ensure vllm>=0.6.2 for compatibility with Llama-3.2. If Llama-3.2 is not used, vllm==0.6.1 is sufficient.
```
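
To confirm the installed version meets the requirement noted above, a quick check (a minimal sketch; assumes the `packaging` package is available, which it usually is alongside pip) is:

```python
# Verify that the installed vLLM is recent enough for Llama-3.2 (>= 0.6.2)
from packaging.version import Version
import vllm

print('vLLM version:', vllm.__version__)
assert Version(vllm.__version__) >= Version('0.6.2'), 'Upgrade with: pip install -U vllm'
```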

Clone the repository and navigate to the inference directory:

```bash
git clone https://github.com/bigai-ai/QA-Synthesizer.git
cd QA-Synthesizer/vllm_inference
RESULTS_DIR=./eval_results  # Directory for saving evaluation scores
```

### 2) Evaluate

Run the following commands:

```bash
# Specify the domain: choose from ['food', 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103']
# 'food' runs inference on all food tasks; others run on individual tasks.
DOMAIN='food'

# Specify the model type: choose from ['llava', 'qwen2_vl', 'mllama']
# For LLaVA-v1.6, Qwen2-VL, and Llama-3.2-Vision-Instruct, respectively.
MODEL_TYPE='qwen2_vl'

# Set the model repository ID on Hugging Face. Examples:
# "Qwen/Qwen2-VL-2B-Instruct", "AdaptLLM/food-Qwen2-VL-2B-Instruct" for MLLMs based on Qwen2-VL-Instruct.
# "meta-llama/Llama-3.2-11B-Vision-Instruct", "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct" for MLLMs based on Llama-3.2-Vision-Instruct.
# "AdaptLLM/food-LLaVA-NeXT-Llama3-8B" for MLLMs based on LLaVA-v1.6.
MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct

# Set the directory for saving model prediction outputs:
OUTPUT_DIR=./output/AdaMLLM-food-Qwen-2B_${DOMAIN}

# Run inference with data parallelism; adjust CUDA devices as needed:
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```

Detailed scripts to reproduce our results are provided in [Evaluation.md](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs/Evaluation.md).

### 3) Results
The evaluation results are stored in `./eval_results`, and the model prediction outputs are in `./output`.
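
For instance, once inference finishes you can list whatever was written under the results directory (a minimal sketch; the exact file names and formats are produced by the inference scripts):

```python
# Walk ./eval_results and print the files produced by run_inference.sh
from pathlib import Path

results_dir = Path('./eval_results')
for path in sorted(p for p in results_dir.rglob('*') if p.is_file()):
    print(path)
```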


## Citation
If you find our work helpful, please cite us.

[AdaMLLM](https://huggingface.co/papers/2411.19930)
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```