---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: visual-question-answering
---
# MMEvol Model Card
## Model Details
This repository provides the pretrained projector weights and the instruction-tuned model weights.
| Model | Pretrained Projector | Base LLM | Pretraining Data | Instruction-Tuning Data | Download |
| ---------------- | -------------------- | --------- | ------------------------------------------------------------ | ----------------------- | -------- |
| MMEvol-LLaMA3-8B | [mm_projector](https://huggingface.co/Tongyi-ConvAI/MMEvol/tree/main/llama3) | LLaMA3-8B | [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) | MMEvol | [ckpt](https://huggingface.co/Tongyi-ConvAI/MMEvol/tree/main/llama3)|
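The checkpoint files can be pulled directly from the Hub. Below is a minimal sketch using the standard `huggingface_hub` `snapshot_download` API; the `llama3/*` pattern simply mirrors the subfolder linked in the table above.

```python
# Minimal sketch: fetch the MMEvol-LLaMA3-8B projector and checkpoint files
# from the Hugging Face Hub (repo id and subfolder taken from the table above).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Tongyi-ConvAI/MMEvol",
    allow_patterns=["llama3/*"],  # pretrained projector + instruction-tuned weights
)
print(f"Files downloaded under: {local_dir}/llama3")
```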
## Performance
### Benchmarks supported by VLMEvalKit (OpenCompass)
| Model | MME_C | MMStar | HallBench | MathVista_mini | MMMU_val | AI2D | POPE | BLINK | RWQA |
| ---------------- | ----- | ------ | --------- | -------------- | -------- | ---- | ---- | ----- | ---- |
| MMEvol-LLaMA3-8B | 47.8 | 50.1 | 62.3 | 50.0 | 40.8 | 73.9 | 86.8 | 46.4 | 62.6 |
### Benchmarks not supported by VLMEvalKit (VQA datasets)
| Model | VQA_v2 | GQA | MIA | MMSInst |
| ---------------- | ------ | ---- | ---- | ------- |
| MMEvol-LLaMA3-8B | 83.4 | 65.0 | 78.8 | 32.3 |
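The VLMEvalKit-supported scores above can be reproduced through the toolkit's `run.py` entry point. The sketch below is only an illustration: the config name `MMEvol-Llama3-8B` is hypothetical (check VLMEvalKit's supported-model list for the exact identifier), and the dataset names follow VLMEvalKit's naming conventions and should be verified against its dataset list.

```python
# Sketch: driving VLMEvalKit's run.py over the OpenCompass benchmarks listed above.
# Assumes VLMEvalKit is installed and this script is run from its repository root.
# "MMEvol-Llama3-8B" is a hypothetical config name; replace it with the actual one.
import subprocess

benchmarks = [
    "MMStar", "HallusionBench", "MathVista_MINI", "MMMU_DEV_VAL",
    "AI2D_TEST", "POPE", "BLINK", "RealWorldQA",
]

subprocess.run(
    ["python", "run.py", "--data", *benchmarks, "--model", "MMEvol-Llama3-8B", "--verbose"],
    check=True,
)
```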
## Paper or resources for more information
- Page: https://mmevol.github.io/
- arXiv: https://arxiv.org/pdf/2409.05840
## License
Llama 3 is licensed under the LLAMA 3 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Contact us if you have any questions
- Run Luo: r.luo@siat.ac.cn
- Haonan Zhang: zchiowal@gmail.com