Update README.md

**Code**: [https://github.com/bigai-ai/QA-Synthesizer](https://github.com/bigai-ai/QA-Synthesizer)

## 1. To Chat with AdaMLLM

Our model architecture aligns with the base model: Qwen-2-VL-Instruct. We provide a usage example below, and you may refer to the official [Qwen-2-VL-Instruct repository](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) for more advanced usage instructions.

**Note:** For AdaMLLM, always place the image at the beginning of the input instruction in the messages.
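The following is a minimal inference sketch that follows the standard Qwen2-VL-Instruct usage pattern with 🤗 Transformers and `qwen_vl_utils`. The checkpoint path and image path are placeholders; substitute the AdaMLLM checkpoint you want to chat with and your own image. As noted above, the image entry is placed first in the message content.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Placeholder: replace with the AdaMLLM checkpoint you want to chat with.
model_id = "path/to/AdaMLLM-checkpoint"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Per the note above: the image comes first, followed by the text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/your_image.jpg"},  # placeholder image path
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat-formatted prompt and collect the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

Because the architecture matches the base model, the same pattern applies when loading the base Qwen-2-VL-Instruct checkpoint instead.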