Update README.md

README.md
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
</p>

## Resources
**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**

| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
|:--|:--|:--|:--|:--|:--|
| [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | VisionFLAN and ALLaVA | - |
| [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/biomed-Qwen2-VL-2B-Instruct) | AdaptLLM/biomed-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct) | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
| [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B) | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B) | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
| [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |

**Code**: [https://github.com/bigai-ai/QA-Synthesizer](https://github.com/bigai-ai/QA-Synthesizer)


## 1. Download Data
You can load datasets using the `datasets` library:
```python
from datasets import load_dataset

task_name = 'FoodSeg103'  # Options: 'Food101', 'FoodSeg103', 'Nutrition5K', 'Recipe1M'

# Load the dataset for the chosen task
data = load_dataset('AdaptLLM/food-VQA-benchmark', task_name, split='test')

print(list(data)[0])
```

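To see what each task contains before running inference, you can inspect the split directly. A minimal sketch using only the `datasets` API; the exact fields differ per task, so check `data.features` rather than assuming a fixed schema:

```python
from datasets import load_dataset

task_name = 'FoodSeg103'
data = load_dataset('AdaptLLM/food-VQA-benchmark', task_name, split='test')

# Inspect the schema and size of the chosen task
print(data.features)
print(len(data))

# Peek at the first few examples without printing full image data
for example in data.select(range(3)):
    print({key: type(value).__name__ for key, value in example.items()})
```
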
The mapping between category names and indices for the `Food101`, `FoodSeg103`, and `Nutrition5K` datasets is provided in the following files:
<details>
<summary> Click to expand </summary>

- Food101: `food101_name_to_label_map.json`
- FoodSeg103: `foodSeg103_id2label.json`
- Nutrition5K: `nutrition5k_ingredients.py`

#### Example Usages

**Food101**
```python
import json

# Load the mapping file
map_path = 'food101_name_to_label_map.json'
name_to_label_map = json.load(open(map_path))

# Replace underscores with spaces to get human-readable class names
name_to_label_map = {key.replace('_', ' '): value for key, value in name_to_label_map.items()}

# Reverse mapping: label to name
label_to_name_map = {value: key for key, value in name_to_label_map.items()}
```

**FoodSeg103**
```python
import json

# Load the mapping file
map_path = 'foodSeg103_id2label.json'
id2name_map = json.load(open(map_path))

# JSON keys are strings; convert them to integer label IDs
id2name_map = {int(key): value for key, value in id2name_map.items()}

# Reverse mapping: name to id
name2id_map = {value: key for key, value in id2name_map.items()}
```

**Nutrition5K**
```python
from nutrition5k_ingredients import all_ingredients

# Create mappings
id2name_map = dict(zip(range(0, len(all_ingredients)), all_ingredients))
name2id_map = {value: key for key, value in id2name_map.items()}
```

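As a quick sanity check on any of these mappings, you can round-trip an index through the paired dictionaries. A minimal sketch using the Nutrition5K maps built above:

```python
from nutrition5k_ingredients import all_ingredients

id2name_map = dict(zip(range(len(all_ingredients)), all_ingredients))
name2id_map = {value: key for key, value in id2name_map.items()}

# Index -> name -> index should return the original index
idx = 0
assert name2id_map[id2name_map[idx]] == idx
print(idx, id2name_map[idx])
```
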
</details>


## 2. Evaluate Any MLLM Compatible with vLLM on the Food Benchmarks

We provide a guide to directly evaluate MLLMs such as LLaVA-v1.6 ([open-source version](https://huggingface.co/Lin-Chen/open-llava-next-llama3-8b)), Qwen2-VL-Instruct, and Llama-3.2-Vision-Instruct.
**The dataset loading script is embedded in the inference code, so you can directly run the following commands to evaluate MLLMs.**

To evaluate other MLLMs, refer to [this guide](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py) for modifying the `BaseTask` class in the [vllm_inference/utils/task.py](https://github.com/bigai-ai/QA-Synthesizer/blob/main/vllm_inference/utils/task.py) file.
Feel free to reach out to us for assistance!

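For orientation, offline multimodal inference with vLLM boils down to building a model-specific prompt and passing the image alongside it. The sketch below paraphrases the linked vLLM example rather than the repository's own `BaseTask` code; the chat-template prompt construction and the image path are assumptions to adapt for your model:

```python
from PIL import Image
from transformers import AutoProcessor
from vllm import LLM, SamplingParams

model_id = 'AdaptLLM/food-Qwen2-VL-2B-Instruct'

# Build a chat-formatted prompt; the processor's template inserts the image placeholder tokens.
processor = AutoProcessor.from_pretrained(model_id)
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "What food is shown in this image?"}]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Run offline inference; vLLM accepts the image via multi_modal_data.
llm = LLM(model=model_id)
image = Image.open('example.jpg')  # hypothetical image path, for illustration only
outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=SamplingParams(temperature=0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```
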
### 1) Setup

Install vLLM using `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source).

As recommended in the official vLLM documentation, install vLLM in a **fresh** conda environment:

```bash
conda create -n vllm python=3.10 -y
conda activate vllm
pip install vllm  # Ensure vllm>=0.6.2 for compatibility with Llama-3.2. If Llama-3.2 is not used, vllm==0.6.1 is sufficient.
```

Clone the repository and navigate to the inference directory:

```bash
git clone https://github.com/bigai-ai/QA-Synthesizer.git
cd QA-Synthesizer/vllm_inference
RESULTS_DIR=./eval_results  # Directory for saving evaluation scores
```

### 2) Evaluate

Run the following commands:

```bash
# Specify the domain: choose from ['food', 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103']
# 'food' runs inference on all food tasks; others run on individual tasks.
DOMAIN='food'

# Specify the model type: choose from ['llava', 'qwen2_vl', 'mllama']
# For LLaVA-v1.6, Qwen2-VL, and Llama-3.2-Vision-Instruct, respectively.
MODEL_TYPE='qwen2_vl'

# Set the model repository ID on Hugging Face. Examples:
# "Qwen/Qwen2-VL-2B-Instruct", "AdaptLLM/food-Qwen2-VL-2B-Instruct" for MLLMs based on Qwen2-VL-Instruct.
# "meta-llama/Llama-3.2-11B-Vision-Instruct", "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct" for MLLMs based on Llama-3.2-Vision-Instruct.
# "AdaptLLM/food-LLaVA-NeXT-Llama3-8B" for MLLMs based on LLaVA-v1.6.
MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct

# Set the directory for saving model prediction outputs:
OUTPUT_DIR=./output/AdaMLLM-food-Qwen-2B_${DOMAIN}

# Run inference with data parallelism; adjust CUDA devices as needed:
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```

Detailed scripts to reproduce our results:

<details>
<summary> Click to expand </summary>

```bash
# Choose from ['food', 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103']
# 'food' runs inference on all food tasks; others run on a single task
DOMAIN='food'

# 1. LLaVA-v1.6-8B
MODEL_TYPE='llava'
MODEL=AdaptLLM/food-LLaVA-NeXT-Llama3-8B  # HuggingFace repo ID for AdaMLLM-food-8B
OUTPUT_DIR=./output/AdaMLLM-food-LLaVA-8B_${DOMAIN}

CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}

# 2. Qwen2-VL-2B
MODEL_TYPE='qwen2_vl'
MODEL=Qwen/Qwen2-VL-2B-Instruct  # HuggingFace repo ID for Qwen2-VL
OUTPUT_DIR=./output/Qwen2-VL-2B-Instruct_${DOMAIN}

CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}

MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct  # HuggingFace repo ID for AdaMLLM-food-2B
OUTPUT_DIR=./output/AdaMLLM-food-Qwen-2B_${DOMAIN}

CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}

# 3. Llama-3.2-11B
MODEL_TYPE='mllama'
MODEL=meta-llama/Llama-3.2-11B-Vision-Instruct  # HuggingFace repo ID for Llama3.2
OUTPUT_DIR=./output/Llama-3.2-11B-Vision-Instruct_${DOMAIN}

CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}

MODEL=AdaptLLM/food-Llama-3.2-11B-Vision-Instruct  # HuggingFace repo ID for AdaMLLM-food-11B
OUTPUT_DIR=./output/AdaMLLM-food-Llama3.2-11B_${DOMAIN}

CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```

</details>

### 3) Results

The evaluation results are stored in `./eval_results`, and the model prediction outputs are in `./output`.

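If you want to post-process the scores programmatically, a short sketch; the file names and formats under `./eval_results` are produced by `run_inference.sh` and are an assumption here, so adjust to what you actually find:

```python
import json
from pathlib import Path

results_dir = Path('./eval_results')

# Print every JSON score file the inference script wrote (assumed format).
for path in sorted(results_dir.rglob('*.json')):
    with open(path) as f:
        print(path, json.load(f))
```
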
## Citation