Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the biomedicine MLLM developed from Llama-3.2-11B-Vision-Instruct in our paper: On Domain-Specific Post-Training for Multimodal Large Language Models. The corresponding training dataset is in medicine-visual-instructions.

The main project page is: Adapt-MLLM-to-Domains

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.

(1) Data Synthesis: Using open-source models, we develop a visual instruction synthesizer that generates diverse visual instruction tasks from domain-specific image-caption pairs. Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.

(2) Training Pipeline: While two-stage training--first on image-caption pairs, then on visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.

(3) Task Evaluation: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B) and then evaluating their performance on various domain-specific tasks.

1. To Chat with AdaMLLM

Our model architecture aligns with the base model, Llama-3.2-Vision-Instruct. We provide a usage example below; refer to the official Llama-3.2-Vision-Instruct repository for more advanced usage instructions.

Note: For AdaMLLM, always place the image at the beginning of the input instruction in the messages.


Starting with transformers >= 4.45.0, you can run inference using conversational messages that may include an image you can query about.

Make sure to update your transformers installation via pip install --upgrade transformers.

import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct"

# Load the model in bfloat16 with automatic device placement
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Fetch an example image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# NOTE: For AdaMLLM, always place the image at the beginning of the input instruction in the messages.  
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "If I had to write a haiku for this one, it would be: "}
    ]}
]
# Apply the chat template and prepare model inputs
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

# Generate and decode the response
output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0]))
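
For a biomedical query, the same pattern applies; only the image and the instruction change. Below is a minimal sketch that reuses the model and processor loaded above; the image path and the question are illustrative placeholders, not part of the official example.

# Hypothetical biomedical query, reusing `model` and `processor` from above.
# Replace the placeholder path with your own biomedical image.
image = Image.open("path/to/your/biomedical_image.jpg")

# NOTE: the image still comes first in the instruction.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What imaging modality is shown, and what findings do you observe?"}
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))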

2. To Evaluate Any MLLM on Domain-Specific Benchmarks

See biomed-VQA-benchmark to reproduce our results and evaluate more MLLMs on the domain-specific benchmarks.

Citation

If you find our work helpful, please cite us.

AdaMLLM

@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}

Instruction Pre-Training (EMNLP 2024)

@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}

Adapt LLM to Domains (ICLR 2024)

@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}