---
license: apache-2.0
datasets:
- FreedomIntelligence/ALLaVA-4V
- Vision-Flan/vision-flan_191-task_1k
language:
- en
base_model:
- Lin-Chen/open-llava-next-llama3-8b
---
# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **visual instruction synthesizer** from our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains).

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.
**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.**
**(2) Training Pipeline**: While two-stage training (first on image-caption pairs, then on visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training (see the sketch after the figure below).
**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/bRu85CWwP9129bSCRzos2.png" width="1000">
</p>
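
As a rough illustration of the single-stage idea, the sketch below mixes image-captioning samples and synthesized instruction samples into one shuffled training pool instead of two consecutive stages. This is illustrative only, not the paper's actual training code; the sample dictionaries and field names are placeholders.

```python
# Sketch of the single-stage mixture: captioning tasks and synthesized
# instruction tasks go into one shuffled pool instead of two training stages.
# The sample dicts below are placeholders with assumed fields, not the
# released data format.
import random

captioning_samples = [
    {"image": "img_0001.jpg", "task": "caption",
     "prompt": "Describe the image.",
     "target": "A plate of strawberry waffles."},
]
synthesized_instruction_samples = [
    {"image": "img_0001.jpg", "task": "instruction",
     "prompt": "How long should each waffle be cooked?",
     "target": "About 5 to 7 minutes, until lightly browned."},
]

# One stage, mixed task types.
single_stage_training_set = captioning_samples + synthesized_instruction_samples
random.shuffle(single_stage_training_set)
```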

## How to use
To synthesize a triplet of (instruction, informative response, precise response) from the image-caption pair below, run the following code. It uses the LLaVA-NeXT classes from `transformers` (a recent release is required), together with `torch`, `Pillow`, and `requests`.

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/mgI_Ayj12_Q_kviWvfAVb.jpeg" width="300">
</p>

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

# Define your input image-caption pair here:
## Image
url = "https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/mgI_Ayj12_Q_kviWvfAVb.jpeg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

## Caption
caption = "Dish: Strawberry Waffles\n\nSteps to prepare:\na). Preheat and grease a waffle iron according to manufacturer's instructions.\nb). Sift flour, baking powder, and salt together in a bowl. Whisk buttermilk, yogurt, butter, eggs, and sugar together in a separate bowl; stir into flour mixture until batter is smooth. Fold strawberries into batter.\nc). Pour about 1/3 cup batter into preheated waffle iron; cook until lightly browned, 5 to 7 minutes. Repeat with remaining batter.\n\nIngredients you'll need:\n(a). 2 1/2 cups all-purpose flour\n(b). 4 teaspoons baking powder\n(c). 3/4 teaspoon salt\n(d). 2 cups buttermilk\n(e). 1/2 cup vanilla Greek-style yogurt\n(f). 1/2 cup butter, melted\n(g). 2 eggs, beaten\n(h). 1 1/2 tablespoons white sugar\n(i). 3/4 cup chopped strawberries, or more to taste"

# Path to the synthesizer
model_path = "AdaptLLM/visual-instruction-synthesizer"

# =========================== Do NOT need to modify the following ===============================

# Prompt hints
caption_hint = "Describe the image."
precise_hint = "Answer with a precise response.\n"
informative_hint = "Answer with an informative response.\n"

# Function to parse predictions into a list of conversation turns
def parse_pred(pred):
    if not pred.endswith("<|end_of_text|>"):
        return []

    pred = pred[:-len("<|end_of_text|>")]

    QA_str_list = pred.split("<|start_header_id|>user<|end_header_id|>\n\n")
    if not pred.endswith("<|eot_id|>"):
        QA_str_list = QA_str_list[:-1]  # Drop the trailing incomplete turn

    QA_list = []
    for QA_str in QA_str_list:
        try:
            assert "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" in QA_str
            Q_str, A_str = QA_str.split("<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n")
            Q_str, A_str = Q_str.strip(), A_str[:-len("<|eot_id|>")].strip()
            assert Q_str and A_str
            QA_list.append({"Q": Q_str, "A": A_str})
        except AssertionError:
            pass  # Skip invalid entries

    conversations = []
    for qa_entry in QA_list:
        conversations.append({"from": "human", "value": qa_entry["Q"]})
        conversations.append({"from": "gpt", "value": qa_entry["A"]})
    return conversations

# Function to extract an (instruction, precise response, informative response) triplet
def get_task_triplet(pred):
    pred_QAs = parse_pred(pred)
    precise_QAs = {}
    informative_QAs = {}
    collected_QA = None

    for idx in range(0, len(pred_QAs), 2):  # Iterate over question-answer pairs
        question = pred_QAs[idx]["value"]
        answer = pred_QAs[idx + 1]["value"]
        if question.startswith(precise_hint):
            precise_q = question[len(precise_hint):]
            if precise_q in informative_QAs:
                collected_QA = {
                    "Q": precise_q,
                    "precise_A": answer,
                    "informative_A": informative_QAs[precise_q],
                }
                break
            else:
                precise_QAs[precise_q] = answer
        elif question.startswith(informative_hint):
            informative_q = question[len(informative_hint):]
            if informative_q in precise_QAs:
                collected_QA = {
                    "Q": informative_q,
                    "precise_A": precise_QAs[informative_q],
                    "informative_A": answer,
                }
                break
            else:
                informative_QAs[informative_q] = answer

    return collected_QA

# Load the processor
processor = LlavaNextProcessor.from_pretrained(model_path)

# Define the image token
image_token = "<|reserved_special_token_4|>"

# Format the prompt: the caption is presented as the assistant's answer to "Describe the image."
prompt = (
    f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"You are a helpful language and vision assistant. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language."
    f"<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{image_token}\n{caption_hint}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    f"{caption}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
)

# Load the model
model = LlavaNextForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")

# Prepare inputs and generate the output
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
answer_start = int(inputs["input_ids"].shape[-1])
output = model.generate(**inputs, max_new_tokens=512)

# Decode predictions (keep special tokens so parse_pred can split on them)
pred = processor.decode(output[0][answer_start:], skip_special_tokens=False)
print(f"## Synthesizer predictions:\n{pred}")

# Extract the task triplet
task_triplet = get_task_triplet(pred)
print(f"## Synthesized task triplet:\n{task_triplet}")
```
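
Once a triplet has been collected, you will typically want to turn it into training samples for domain-specific post-training. The snippet below is a minimal, illustrative sketch (not part of the released pipeline): it converts the `task_triplet` returned above into two LLaVA-style conversation records, one using the informative response and one using the precise response. The record schema, image path, and output file name are assumptions.

```python
# Illustrative only: convert the synthesized triplet into LLaVA-style training
# records. The record schema, image path, and file name below are assumptions,
# not the exact format used by the released training pipeline.
import json

def triplet_to_records(task_triplet, image_path):
    if task_triplet is None:
        return []  # No valid triplet was synthesized for this pair
    records = []
    for answer_key in ("informative_A", "precise_A"):
        records.append({
            "image": image_path,
            "conversations": [
                {"from": "human", "value": f"<image>\n{task_triplet['Q']}"},
                {"from": "gpt", "value": task_triplet[answer_key]},
            ],
        })
    return records

records = triplet_to_records(task_triplet, image_path="strawberry_waffles.jpeg")
with open("synthesized_tasks.json", "w") as f:
    json.dump(records, f, indent=2)
```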

## Citation
If you find our work helpful, please cite us.

AdaMLLM
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```