---
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - Vision
  - food
  - recipe
configs:
  - config_name: Recipe1M
    data_files:
      - split: test
        path: food_eval_multitask_v2/data-*.arrow
  - config_name: Nutrition5K
    data_files:
      - split: test
        path: nutrition50k/data-*.arrow
  - config_name: Food101
    data_files:
      - split: test
        path: food101/data-*.arrow
  - config_name: FoodSeg103
    data_files:
      - split: test
        path: foodseg103/data-*.arrow
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the food visual instruction tasks for evaluating MLLMs in our paper: On Domain-Specific Post-Training for Multimodal Large Language Models.

The main project page is: Adapt-MLLM-to-Domains

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.

1. **Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.
2. **Training Pipeline**: While the two-stage training--initially on image-caption pairs followed by visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.
3. **Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B) and then evaluating MLLM performance on various domain-specific tasks.

## Resources

🤗 We share our data and models with example usages. Feel free to open any issues or discussions! 🤗

| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
|:------|:-----------------|:-------|:-----------|:--------------|:---------------------|
| Visual Instruction Synthesizer | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | VisionFLAN and ALLaVA | - |
| AdaMLLM-med-2B | AdaptLLM/biomed-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-2B | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | food-visual-instructions | food-VQA-benchmark |
| AdaMLLM-med-8B | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-8B | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | food-visual-instructions | food-VQA-benchmark |
| AdaMLLM-med-11B | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-11B | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | food-visual-instructions | food-VQA-benchmark |

Code: https://github.com/bigai-ai/QA-Synthesizer

## 1. Download Data

You can load datasets using the datasets library:

```python
from datasets import load_dataset

# Choose the task name from the list of available tasks
task_name = 'FoodSeg103'  # Options: 'Food101', 'FoodSeg103', 'Nutrition5K', 'Recipe1M'

# Load the dataset for the chosen task
data = load_dataset('AdaptLLM/food-VQA-benchmark', task_name, split='test')

# Print the first example
print(data[0])
```
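
If you want to pull every benchmark at once, a small loop over the config names listed in the metadata above works too. This is just a convenience sketch, assuming the same repo ID and `test` split as the snippet above:

```python
from datasets import load_dataset

# All benchmark configs available in this repo (see the YAML header above).
TASKS = ['Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103']

benchmarks = {}
for task_name in TASKS:
    # Each config only ships a 'test' split.
    benchmarks[task_name] = load_dataset('AdaptLLM/food-VQA-benchmark', task_name, split='test')
    print(f"{task_name}: {len(benchmarks[task_name])} examples")
```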

The mapping between category names and indices for the Food101, FoodSeg103, and Nutrition5K datasets is provided in the following files:

- Food101: `food101_name_to_label_map.json`
- FoodSeg103: `foodSeg103_id2label.json`
- Nutrition5K: `nutrition5k_ingredients.py`

Example Usages:

Food101

```python
import json

# Load the name-to-label mapping and normalize names (e.g., 'apple_pie' -> 'apple pie')
map_path = 'food101_name_to_label_map.json'
with open(map_path) as f:
    name_to_label_map = json.load(f)
name_to_label_map = {key.replace('_', ' '): value for key, value in name_to_label_map.items()}

# Reverse mapping: label to name
label_to_name_map = {value: key for key, value in name_to_label_map.items()}
```
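
Continuing from the snippet above, the two dictionaries convert between class names and numeric labels. The concrete label value printed here depends on the contents of `food101_name_to_label_map.json`, so it is only illustrative:

```python
# Look up the numeric label of a normalized class name (underscores already replaced by spaces).
example_name = 'apple pie'  # illustrative class name
label = name_to_label_map[example_name]

# And map the label back to the class name.
assert label_to_name_map[label] == example_name
print(example_name, '->', label)
```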

FoodSeg103

```python
import json

# Load the ID-to-name mapping
map_path = 'foodSeg103_id2label.json'
with open(map_path) as f:
    id2name_map = json.load(f)

# Remove background and irrelevant labels
id2name_map.pop("0")    # Background
id2name_map.pop("103")  # Other ingredients

# Convert keys to integers
id2name_map = {int(key): value for key, value in id2name_map.items()}

# Create reverse mapping: name to ID
name2id_map = {value: key for key, value in id2name_map.items()}
```

Nutrition5K

```python
from nutrition5k_ingredients import all_ingredients

# Create mappings between ingredient indices and names
id2name_map = dict(enumerate(all_ingredients))
name2id_map = {value: key for key, value in id2name_map.items()}
```

## 2. Evaluate Any MLLM Compatible with vLLM on the Food Benchmarks

We provide a guide to directly evaluate MLLMs such as LLaVA-v1.6 (open-source version), Qwen2-VL-Instruct, and Llama-3.2-Vision-Instruct.
To evaluate other MLLMs, refer to this guide for modifying the BaseTask class in the `vllm_inference/utils/task.py` file.
Feel free to reach out to us for assistance!
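
For orientation, adapting the task code to a new MLLM usually amounts to defining how to build that model's chat prompt and how to parse its raw generation. The sketch below is illustrative only: it does **not** reproduce the actual `BaseTask` interface in `vllm_inference/utils/task.py`, and all class and method names here are hypothetical.

```python
# Illustrative pattern only -- NOT the actual BaseTask API from
# QA-Synthesizer/vllm_inference/utils/task.py; all names here are hypothetical.
class ExampleBaseTask:
    def build_prompt(self, question: str) -> str:
        """Wrap the benchmark question in the chat template your MLLM expects."""
        raise NotImplementedError

    def parse_answer(self, raw_output: str) -> str:
        """Extract the final answer from the model's raw generation."""
        return raw_output.strip()


class ExampleNewModelTask(ExampleBaseTask):
    """Hypothetical adapter for an MLLM not covered by the built-in model types."""

    def build_prompt(self, question: str) -> str:
        # Replace this with the chat template of the model you want to evaluate.
        return f"USER: <image>\n{question}\nASSISTANT:"
```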

The dataset loading script is embedded in the inference code, so you can directly run the following commands to evaluate MLLMs.

### 1) Setup

Install vLLM using pip or from source.

As recommended in the official vLLM documentation, install vLLM in a fresh conda environment:

```bash
conda create -n vllm python=3.10 -y
conda activate vllm
pip install vllm  # Ensure vllm>=0.6.2 for compatibility with Llama-3.2; vllm==0.6.1 is sufficient if Llama-3.2 is not used.
```
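
After installation, you can quickly confirm which version was installed (only relevant if you plan to evaluate Llama-3.2 models, which need vllm>=0.6.2):

```python
# Print the installed vLLM version; it should be >= 0.6.2 for Llama-3.2-Vision models.
import vllm

print(vllm.__version__)
```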

Clone the repository and navigate to the inference directory:

```bash
git clone https://github.com/bigai-ai/QA-Synthesizer.git
cd QA-Synthesizer/vllm_inference
RESULTS_DIR=./eval_results  # Directory for saving evaluation scores
```

### 2) Evaluate

Run the following commands:

```bash
# Specify the domain: choose from ['food', 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103'].
# 'food' runs inference on all food tasks; the others run on a single task.
DOMAIN='food'

# Specify the model type: choose from ['llava', 'qwen2_vl', 'mllama'],
# for LLaVA-v1.6, Qwen2-VL, and Llama-3.2-Vision-Instruct, respectively.
MODEL_TYPE='qwen2_vl'

# Set the model repository ID on Hugging Face. Examples:
# "Qwen/Qwen2-VL-2B-Instruct" or "AdaptLLM/food-Qwen2-VL-2B-Instruct" for MLLMs based on Qwen2-VL-Instruct.
# "meta-llama/Llama-3.2-11B-Vision-Instruct" or "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct" for MLLMs based on Llama-3.2-Vision-Instruct.
# "AdaptLLM/food-LLaVA-NeXT-Llama3-8B" for MLLMs based on LLaVA-v1.6.
MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct

# Set the directory for saving model prediction outputs:
OUTPUT_DIR=./output/AdaMLLM-food-Qwen-2B_${DOMAIN}

# Run inference with data parallelism; adjust CUDA devices as needed:
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```

Detailed scripts to reproduce our results are provided in Evaluation.md.

### 3) Results

The evaluation results are stored in ./eval_results, and the model prediction outputs are in ./output.
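
The exact file names and formats under these directories are produced by `run_inference.sh`, so the snippet below simply walks the two folders and prints whatever files it finds (adjust the paths if you changed `RESULTS_DIR` or `OUTPUT_DIR` above):

```python
from pathlib import Path

# List whatever the evaluation run produced; file names and formats depend on run_inference.sh.
for folder in [Path('./eval_results'), Path('./output')]:
    print(f"== {folder} ==")
    for path in sorted(folder.rglob('*')):
        if path.is_file():
            print(path)
```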

## Citation

If you find our work helpful, please cite us.

AdaMLLM

```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

Instruction Pre-Training (EMNLP 2024)

```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```

Adapt LLM to Domains (ICLR 2024)

```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```