---
license: apache-2.0
datasets:
- FreedomIntelligence/ALLaVA-4V
- Vision-Flan/vision-flan_191-task_1k
language:
- en
base_model:
- Lin-Chen/open-llava-next-llama3-8b
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **visual-instruction synthesizer** from our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.

**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.**

**(2) Training Pipeline**: While two-stage training--first on image-caption pairs, then on visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.

**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.
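As a loose illustration of point (2), the sketch below shows one way a synthesized task and the original image-caption pair could be merged into a single LLaVA-style training record for single-stage training. The record layout (`image`/`conversations` keys, the `<image>` placeholder) and the helper name `build_single_stage_record` are assumptions for illustration only, not the exact data format released with the paper.

```python
# Hypothetical sketch (not the paper's exact data format): merge an image-caption
# pair and one synthesized task into a single LLaVA-style training record, so a
# single training stage covers both captioning and instruction following.
def build_single_stage_record(image_path, caption, triplet):
    conversations = [
        # Caption task built from the original image-caption pair
        {"from": "human", "value": "<image>\nDescribe the image."},
        {"from": "gpt", "value": caption},
        # Synthesized instruction task; the informative response is used here
        {"from": "human", "value": triplet["Q"]},
        {"from": "gpt", "value": triplet["informative_A"]},
    ]
    return {"image": image_path, "conversations": conversations}
```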
## How to use

Use the script below to synthesize an "instruction-informative response-precise response" triplet from an image-caption pair (the example uses a food image and its recipe as the caption).
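For reference, the script ends by extracting a single task triplet as a Python dict. The keys below match the `get_task_triplet` helper in the script; the values are invented purely for illustration:

```python
# Illustrative output of get_task_triplet() (values are made-up examples)
task_triplet = {
    "Q": "What fruit tops the waffles in the image?",
    "informative_A": "The waffles are topped with chopped strawberries, which are also folded into the batter.",
    "precise_A": "Strawberries.",
}
```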
```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

# Define your input image-caption pair here:
## Image
url = "https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/mgI_Ayj12_Q_kviWvfAVb.jpeg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

## Caption
caption = "Dish: Strawberry Waffles\n\nSteps to prepare:\na). Preheat and grease a waffle iron according to manufacturer's instructions.\nb). Sift flour, baking powder, and salt together in a bowl. Whisk buttermilk, yogurt, butter, eggs, and sugar together in a separate bowl; stir into flour mixture until batter is smooth. Fold strawberries into batter.\nc). Pour about 1/3 cup batter into preheated waffle iron; cook until lightly browned, 5 to 7 minutes. Repeat with remaining batter.\n\nIngredients you'll need:\n(a). 2 1/2 cups all-purpose flour\n(b). 4 teaspoons baking powder\n(c). 3/4 teaspoon salt\n(d). 2 cups buttermilk\n(e). 1/2 cup vanilla Greek-style yogurt\n(f). 1/2 cup butter, melted\n(g). 2 eggs, beaten\n(h). 1 1/2 tablespoons white sugar\n(i). 3/4 cup chopped strawberries, or more to taste"

# Path to synthesizer
model_path = "AdaptLLM/visual-instruction-synthesizer"

# =========================== Do NOT need to modify the following ===============================
# Prompt hints
caption_hint = "Describe the image."
precise_hint = "Answer with a precise response.\n"
informative_hint = "Answer with an informative response.\n"

# Function to parse predictions
def parse_pred(pred):
    if not pred.endswith("<|end_of_text|>"):
        return []
    pred = pred[:-len("<|end_of_text|>")]

    QA_str_list = pred.split("<|start_header_id|>user<|end_header_id|>\n\n")
    if not pred.endswith("<|eot_id|>"):
        QA_str_list = QA_str_list[:-1]

    QA_list = []
    for QA_str in QA_str_list:
        try:
            assert "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" in QA_str
            Q_str, A_str = QA_str.split("<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n")
            Q_str, A_str = Q_str.strip(), A_str[:-len("<|eot_id|>")].strip()
            assert Q_str and A_str
            QA_list.append({"Q": Q_str, "A": A_str})
        except AssertionError:
            pass  # Skip invalid entries

    conversations = []
    for qa_entry in QA_list:
        conversations.append({"from": "human", "value": qa_entry["Q"]})
        conversations.append({"from": "gpt", "value": qa_entry["A"]})
    return conversations

# Function to extract task triplets
def get_task_triplet(pred):
    pred_QAs = parse_pred(pred)
    precise_QAs = {}
    informative_QAs = {}
    collected_QA = None

    for idx in range(0, len(pred_QAs), 2):  # Iterate over question-answer pairs
        question = pred_QAs[idx]["value"]
        answer = pred_QAs[idx + 1]["value"]
        if question.startswith(precise_hint):
            precise_q = question[len(precise_hint):]
            if precise_q in informative_QAs:
                collected_QA = {
                    "Q": precise_q,
                    "precise_A": answer,
                    "informative_A": informative_QAs[precise_q],
                }
                break
            else:
                precise_QAs[precise_q] = answer
        elif question.startswith(informative_hint):
            informative_q = question[len(informative_hint):]
            if informative_q in precise_QAs:
                collected_QA = {
                    "Q": informative_q,
                    "precise_A": precise_QAs[informative_q],
                    "informative_A": answer,
                }
                break
            else:
                informative_QAs[informative_q] = answer

    return collected_QA

# Load the processor
processor = LlavaNextProcessor.from_pretrained(model_path)

# Define image token
image_token = "<|reserved_special_token_4|>"

# Format the prompt
prompt = (
    f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"You are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language."
    f"<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{image_token}\n{caption_hint}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    f"{caption}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
)

# Load the model
model = LlavaNextForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")

# Prepare inputs and generate output
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
answer_start = int(inputs["input_ids"].shape[-1])
output = model.generate(**inputs, max_new_tokens=512)

# Decode predictions
pred = processor.decode(output[0][answer_start:], skip_special_tokens=False)
print(f"## Synthesizer predictions:\n{pred}")

# Extract task triplets
task_triplet = get_task_triplet(pred)
print(f"## Synthesized Task triplet:\n{task_triplet}")
```

## Citation

If you find our work helpful, please cite us.

AdaMLLM
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```