---
task_categories:
- visual-question-answering
language:
- en
tags:
- Vision
- food
- recipe
configs:
- config_name: Food101
  data_files:
  - split: test
    path: food101/data-*.arrow
- config_name: FoodSeg103
  data_files:
  - split: test
    path: foodseg103/data-*.arrow
- config_name: Nutrition5K
  data_files:
  - split: test
    path: nutriton50k/data-*.arrow
- config_name: Recipe1M
  data_files:
  - split: test
    path: food_eval_multitask_v2/data-*.arrow
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **food visual instruction tasks for evaluating MLLMs** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.
**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.**
**(2) Training Pipeline**: While two-stage training (first on image-caption pairs, then on visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.
**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.
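
As a rough illustration of the evaluation setup, the sketch below loops over the four food tasks in this repo and queries a model on each test example. The `answer_with_mllm` function is a hypothetical placeholder for your own MLLM inference code (e.g., a Qwen2-VL or LLaVA pipeline); it is not part of this repo, and the official evaluation scripts may differ.

```python
from datasets import load_dataset

TASKS = ['Food101', 'FoodSeg103', 'Nutrition5K', 'Recipe1M']

def answer_with_mllm(example):
    # Hypothetical placeholder: run your post-trained MLLM on the example here
    # and return its answer as a string. Replace with real inference code.
    return ''

predictions = {}
for task_name in TASKS:
    data = load_dataset('AdaptLLM/food-vision-language-tasks', task_name, split='test')
    predictions[task_name] = [answer_with_mllm(example) for example in data]
```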

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
</p>

## How to use

```python
from datasets import load_dataset

# Choose the task name from the list of available tasks
task_name = 'Food101'  # Options: 'Food101', 'FoodSeg103', 'Nutrition5K', 'Recipe1M'

# Load the test split of the chosen task
data = load_dataset('AdaptLLM/food-vision-language-tasks', task_name, split='test')
```
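
To get a feel for the data, you can inspect the split size and a single example; the exact field names vary across the four tasks, so this simply prints whatever the chosen config contains.

```python
# Inspect the loaded split (field names differ between tasks)
print(len(data), 'examples')
print(data[0])
```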

## Citation
If you find our work helpful, please cite us.

AdaMLLM
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[AdaptLLM](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{adaptllm,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```