---
task_categories:
- visual-question-answering
language:
- en
tags:
- Vision
- food
- recipe
configs:
- config_name: Recipe1M
  data_files:
  - split: test
    path: food_eval_multitask_v2/data-*.arrow
- config_name: Nutrition5K
  data_files:
  - split: test
    path: nutrition50k/data-*.arrow
- config_name: Food101
  data_files:
  - split: test
    path: food101/data-*.arrow
- config_name: FoodSeg103
  data_files:
  - split: test
    path: foodseg103/data-*.arrow
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **food visual instruction tasks for evaluating MLLMs** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.

**(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.**

**(2) Training Pipeline**: While two-stage training (first on image-caption pairs, then on visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training.

**(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks.