Update README.md

<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
</p>

## Resources
**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**

| Model | Repo ID in HF 🤗 | Domain | Base Model | Training Data | Evaluation Benchmark |
|:---|:---|:---|:---|:---|:---|
| [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/visual-instruction-synthesizer) | AdaptLLM/visual-instruction-synthesizer | - | open-llava-next-llama3-8b | VisionFLAN and ALLaVA | - |
| [AdaMLLM-med-2B](https://huggingface.co/AdaptLLM/biomed-Qwen2-VL-2B-Instruct) | AdaptLLM/biomed-Qwen2-VL-2B-Instruct | Biomedicine | Qwen2-VL-2B-Instruct | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-2B](https://huggingface.co/AdaptLLM/food-Qwen2-VL-2B-Instruct) | AdaptLLM/food-Qwen2-VL-2B-Instruct | Food | Qwen2-VL-2B-Instruct | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
| [AdaMLLM-med-8B](https://huggingface.co/AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B) | AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B | Biomedicine | open-llava-next-llama3-8b | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-8B](https://huggingface.co/AdaptLLM/food-LLaVA-NeXT-Llama3-8B) | AdaptLLM/food-LLaVA-NeXT-Llama3-8B | Food | open-llava-next-llama3-8b | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |
| [AdaMLLM-med-11B](https://huggingface.co/AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct | Biomedicine | Llama-3.2-11B-Vision-Instruct | [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/biomed-visual-instructions) | [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) |
| [AdaMLLM-food-11B](https://huggingface.co/AdaptLLM/food-Llama-3.2-11B-Vision-Instruct) | AdaptLLM/food-Llama-3.2-11B-Vision-Instruct | Food | Llama-3.2-11B-Vision-Instruct | [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions) | [food-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/food-VQA-benchmark) |

**Code**: [https://github.com/bigai-ai/QA-Synthesizer](https://github.com/bigai-ai/QA-Synthesizer)
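
The training data and benchmark repos in the table are hosted as Hugging Face dataset repos. The snippet below is a minimal sketch for fetching the raw files locally with `huggingface_hub` (the `local_dir` names are arbitrary choices, not part of our pipeline; the training and evaluation code itself lives in the GitHub repo linked above):

```python
from huggingface_hub import snapshot_download

# Download the biomedicine visual-instruction data and its VQA benchmark.
# Substitute any dataset repo ID from the table above (e.g. the food-domain ones).
snapshot_download(repo_id="AdaptLLM/biomed-visual-instructions", repo_type="dataset",
                  local_dir="./biomed-visual-instructions")
snapshot_download(repo_id="AdaptLLM/biomed-VQA-benchmark", repo_type="dataset",
                  local_dir="./biomed-VQA-benchmark")
```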
## 1. To Chat with AdaMLLM
```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
# ...
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

instruction = "What's in the image?"

model_path='AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B'

# =========================== Do NOT need to modify the following ===============================
# Load the processor
# ...
pred = processor.decode(output[0][answer_start:], skip_special_tokens=True)
print(pred)
```
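
For a complete, runnable reference, below is a minimal end-to-end sketch of the same chat flow using the standard `transformers` LLaVA-NeXT API. The image URL, the prompt construction via the processor's chat template, and the generation settings are illustrative assumptions rather than the exact script above; swap `model_path` for any repo ID in the table to chat with a different AdaMLLM model.

```python
import torch
import requests
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_path = 'AdaptLLM/biomed-LLaVA-NeXT-Llama3-8B'

# Load the processor and the model (half precision, spread over available GPUs)
processor = LlavaNextProcessor.from_pretrained(model_path)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Any RGB image works here; this URL is only a placeholder
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

instruction = "What's in the image?"

# Build a single-turn conversation; this assumes the processor ships a chat template,
# otherwise format the prompt manually for the base model.
conversation = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": instruction}]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

# Generate, then decode only the newly produced tokens
output = model.generate(**inputs, max_new_tokens=512)
answer_start = inputs["input_ids"].shape[-1]
pred = processor.decode(output[0][answer_start:], skip_special_tokens=True)
print(pred)
```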
## 2. To Evaluate AdaMLLM on the Domain-Specific Benchmarks
See [biomed-VQA-benchmark](https://huggingface.co/datasets/AdaptLLM/biomed-VQA-benchmark) to reproduce our results and evaluate more MLLMs on the domain-specific benchmarks.
## Citation
If you find our work helpful, please cite us.