Create README.md #1
by Eithannak · opened

README.md CHANGED
---
license: mit
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
base_model:
- apple/OpenELM
- apple/aimv2-large-patch14-224
pipeline_tag: image-text-to-text
tags:
- cpu
- nano
- small
- tiny
- llava
model_size: 0.6B parameters
---

**<center><span style="font-size:2em;">TinyLLaVA 4 CPU</span></center>**

[![License](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT)
[![CPU](https://img.shields.io/badge/CPU-Supported-blue)](https://huggingface.co)
[![arXiv](https://img.shields.io/badge/arXiv-2402.14289-red)](https://arxiv.org/pdf/2402.14289)

---

### **Model Overview**

`tiny-llava-open-elm-aimv2` is a lightweight image-text-to-text model that combines **[OpenELM](https://huggingface.co/apple/OpenELM)** as the LLM backbone and **[AIMv2-Large-Patch14-224](https://huggingface.co/apple/aimv2-large-patch14-224)** as the vision encoder. The model was fine-tuned with **LoRA (Low-Rank Adaptation)** for parameter-efficient training, using the **[TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory)** codebase, a modular framework for building lightweight multi-modal models.
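For orientation, the snippet below sketches how a LoRA fine-tune of an OpenELM backbone is typically set up with Hugging Face `peft`. It is a minimal sketch, not the recipe used to train this model: the checkpoint, rank, alpha, dropout, and target module names are illustrative assumptions.

```python
# Minimal LoRA setup sketch with Hugging Face peft. All values below are
# illustrative, not the configuration used for tiny-llava-open-elm-aimv2.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-450M",    # example OpenELM size, not necessarily this model's
    trust_remote_code=True,  # OpenELM ships its modeling code with the repo
)

lora_config = LoraConfig(
    r=16,                                      # low-rank dimension (hypothetical)
    lora_alpha=32,                             # scaling factor (hypothetical)
    target_modules=["qkv_proj", "out_proj"],   # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```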

The model is designed to run efficiently on **CPU**, making it well suited to resource-constrained environments. It was trained on the LLaVA pretraining and instruction-tuning data listed above and evaluated on the **POPE** and **TextVQA** benchmarks. The total model size is **0.6B parameters**.
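To try the model on CPU, a minimal loading sketch with `transformers` is shown below. The repo id is a hypothetical placeholder, and the `chat()` helper follows the remote-code convention used on other TinyLLaVA Factory model cards; whether this checkpoint exposes the same interface is an assumption, so check the repository files first.

```python
# Minimal CPU inference sketch. Assumes this checkpoint follows the TinyLLaVA
# Factory remote-code convention (an assumption -- verify against the repo);
# the repo id below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Eithannak/tiny-llava-open-elm-aimv2"  # hypothetical repo id
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)
model.to("cpu").eval()

prompt = "What is shown in this image?"
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"

# TinyLLaVA Factory checkpoints expose a chat() helper via remote code that
# returns the answer text and the generation time.
answer, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)
print(answer)
```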

---

### **Performance**

| Model Name | VQAv2 | GQA | SQA | TextVQA | MM-VET | POPE | MME | MMMU |
|:----------:|:-----:|:---:|:---:|:-------:|:------:|:----:|:---:|:----:|
| [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | 78.5 | 62.0 | 66.8 | 58.2 | 30.5 | 85.9 | 1510.7 | - |
| [bczhou/TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) | 79.9 | 62.0 | 69.1 | 59.1 | 32.0 | 86.4 | 1464.9 | - |
| [tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B](https://huggingface.co/tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B) | 78.4 | 61.6 | 64.4 | 53.6 | 26.9 | 86.4 | 1339.0 | 31.7 |
| [tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B](https://huggingface.co/tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B) | 80.1 | 62.1 | 73.0 | 60.3 | 37.5 | 87.2 | 1466.4 | 38.4 |
| tiny-llava-open-elm-aimv2 | - | - | - | 39.68 | - | 83.93 | - | - |

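POPE poses binary yes/no questions about object presence, which is why the metadata lists accuracy, precision, recall, and F1. The sketch below shows how such answers are typically scored; the toy predictions and labels are illustrative, not real evaluation data.

```python
# POPE-style scoring sketch: binary yes/no answers scored with
# accuracy/precision/recall/F1. The toy inputs below are illustrative only.
def pope_scores(preds, labels):
    tp = sum(p == "yes" and l == "yes" for p, l in zip(preds, labels))
    fp = sum(p == "yes" and l == "no" for p, l in zip(preds, labels))
    fn = sum(p == "no" and l == "yes" for p, l in zip(preds, labels))
    tn = sum(p == "no" and l == "no" for p, l in zip(preds, labels))
    accuracy = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

print(pope_scores(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"]))
# -> (0.75, 0.5, 1.0, 0.666...)
```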
---

### **References**

- [OpenELM](https://huggingface.co/apple/OpenELM)
- [AIMv2-Large-Patch14-224](https://huggingface.co/apple/aimv2-large-patch14-224)
- [TinyLLaVA Factory](https://github.com/TinyLLaVA/TinyLLaVA_Factory)
- [TinyLLaVA Paper (arXiv:2402.14289)](https://arxiv.org/pdf/2402.14289)
- [LoRA Paper (arXiv:2106.09685)](https://arxiv.org/abs/2106.09685)