On Domain-Specific Post-Training for Multimodal Large Language Models
Abstract
Recent years have witnessed the rapid development of general multimodal large language models (MLLMs). However, adapting general MLLMs to specific domains, such as scientific fields and industrial applications, remains less explored. This paper systematically investigates domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation. (1) Data Synthesis: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs. (2) Training Pipeline: While the two-stage training--initially on image-caption pairs followed by visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training. (3) Task Evaluation: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks. To support further research in MLLM domain adaptation, we will open-source our implementations.
Community
AdaMLLM represents our latest advancement in building domain-specific foundation models through post-training.
AdaptLLM: Adapt LLM to domains (biomedicine, finance, law, etc.)
We employ rule-based methods to extract tasks from domain-specific corpora and reformat them into reading comprehension tasks for continued pre-training. Our 7B finance model outperforms domain-specific models of much larger scale, such as BloombergGPT-50B.

AdaMLLM: Adapt Multimodal LLM to domains (biomedicine, food, etc.)
We extend supervised task synthesis to multimodality, introducing a unified visual instruction synthesizer to extract instruction-response pairs from domain-specific image-caption pairs. Our synthetic tasks outperform those generated by manual rules, GPT-4, and GPT-4V in improving domain-specific performance for MLLMs.
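To make the data flow concrete, here is a minimal sketch of how instruction-response pairs produced by a synthesizer might be packaged together with the original caption into a single training record for single-stage visual instruction tuning. The record schema (`"conversations"`, the `<image>` placeholder, the `to_conversation_record` helper) follows the common LLaVA-style convention and is an assumption for illustration; the actual format used by AdaMLLM may differ.

```python
# Hypothetical sketch: packaging synthesized instruction-response pairs into a
# LLaVA-style conversation record. Field names ("conversations", "<image>")
# follow a common community convention, not a confirmed AdaMLLM schema.

def to_conversation_record(image_path, caption, qa_pairs):
    """Build one training record from an image-caption pair plus the
    instruction-response pairs synthesized from it."""
    conversations = [
        # Keep the original captioning task so image-caption alignment is
        # learned within the same single training stage.
        {"from": "human", "value": "<image>\nDescribe this image."},
        {"from": "gpt", "value": caption},
    ]
    for question, answer in qa_pairs:
        conversations.append({"from": "human", "value": question})
        conversations.append({"from": "gpt", "value": answer})
    return {"image": image_path, "conversations": conversations}

record = to_conversation_record(
    "biomed/chest_xray_001.png",  # hypothetical example path
    "A frontal chest X-ray showing clear lung fields.",
    [("What imaging modality is shown?", "A chest X-ray (radiograph).")],
)
```

Mixing the captioning turn with the synthesized QA turns in one record is one way the single-stage pipeline can preserve task diversity without a separate alignment stage.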
🤗 We share our data and models with example usage; feel free to open any issues or discussions! 🤗
Project Page: Adapt-MLLM-to-Domains
Code: https://github.com/bigai-ai/QA-Synthesizer
| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
|---|---|---|---|---|---|
| Visual Instruction Synthesizer | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | VisionFLAN and ALLaVA | - |
| AdaMLLM-med-2B | AdaptLLM/biomed-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-2B | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | food-visual-instructions | food-VQA-benchmark |
| AdaMLLM-med-8B | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-8B | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | food-visual-instructions | food-VQA-benchmark |
| AdaMLLM-med-11B | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | biomed-visual-instructions | biomed-VQA-benchmark |
| AdaMLLM-food-11B | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | food-visual-instructions | food-VQA-benchmark |
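The evaluation benchmarks above are VQA-style, so scoring typically reduces to comparing model answers against references. A minimal sketch of one common convention, normalized exact-match accuracy, is below; the actual metrics used by the biomed/food VQA benchmarks may differ (e.g., token-level recall or closed-set option matching), and `exact_match_accuracy` is an illustrative helper, not the repository's API.

```python
# Hypothetical sketch of VQA scoring via normalized exact match -- one common
# convention, not necessarily the benchmarks' official metric.
import re

def normalize(text):
    """Lowercase, replace punctuation with spaces, drop articles, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match_accuracy(predictions, references):
    """Fraction of predictions that equal their reference after normalization."""
    assert len(predictions) == len(references)
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(predictions)

score = exact_match_accuracy(
    ["A chest X-ray.", "yes", "Pasta with tomato sauce"],
    ["chest x-ray", "Yes", "pizza"],
)
# 2 of 3 pairs match after normalization, so score == 2/3
```

Normalization matters because generative MLLMs often phrase correct answers differently from the reference ("A chest X-ray." vs. "chest x-ray"); without it, exact match systematically underestimates accuracy.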
The following related papers were recommended by the Semantic Scholar API:
- SemiHVision: Enhancing Medical Multimodal Models with a Semi-Human Annotated Dataset and Fine-Tuned Instruction Generation (2024)
- SimRAG: Self-Improving Retrieval-Augmented Generation for Adapting Large Language Models to Specialized Domains (2024)
- Infinity-MM: Scaling Multimodal Performance with Large-Scale and High-Quality Instruction Data (2024)
- From Generalist to Specialist: Adapting Vision Language Models via Task-Specific Visual Instruction Tuning (2024)
- MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration (2024)
- HumanVLM: Foundation for Human-Scene Vision-Language Model (2024)
- Surgical-LLaVA: Toward Surgical Scenario Understanding via Large Language and Vision Models (2024)