---
license: apache-2.0
datasets:
- tabtoyou/KoLLaVA-Instruct-150k
- tabtoyou/KoLLaVA-CC3M-Pretrain-595K
language:
- ko
library_name: transformers
tags:
- LLaVA
- KoVicuna
- KoLLaVA
- KoAlpaca
---

## KoLLaVA : Korean Large Language and Vision Assistant (feat. LLaVA)

This model is a fine-tuned version of [KoVicuna](https://huggingface.co/junelee/ko_vicuna_7b) on the KoLLaVA datasets listed above.
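
Below is a minimal, hedged loading sketch using the generic `transformers` auto classes (per `library_name` above). The repo id is a placeholder for this model's actual Hugging Face path, and image-conditioned inference likely requires the LLaVA-style model class from the KoLLaVA repository rather than this generic path.

```python
# Text-only loading sketch. MODEL_ID is hypothetical: substitute this model
# card's actual repo path. Image inputs are not handled here; LLaVA-style
# checkpoints usually need the project's own model/processor classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tabtoyou/KoLLaVA-KoVicuna-7b"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",          # requires `accelerate`
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```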

Detailed code is available in the [KoLLaVA GitHub repository](https://github.com/tabtoyou/KoLLaVA).

### Training hyperparameters
* learning_rate: 2e-5
* train_batch_size: 16
* distributed_type: multi-GPU (A100 80G)
* num_devices: 4
* gradient_accumulation_steps: 1
* total_train_batch_size: 64
* total_eval_batch_size: 16
* lr_scheduler_type: cosine
* num_epochs: 1
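
These values map naturally onto `transformers.TrainingArguments`. The sketch below is an assumption-laden reconstruction, not the authors' actual launch script: the output path and precision flag are guesses, and the real training code lives in the KoLLaVA repository linked above.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# Effective train batch = 16 per device * 4 GPUs * 1 accumulation step = 64,
# matching total_train_batch_size; eval: 4 per device * 4 GPUs = 16.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./kollava-finetune",  # hypothetical path, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=4,     # 4 * 4 devices = total_eval_batch_size 16
    gradient_accumulation_steps=1,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    bf16=True,                        # assumption: bf16 is common on A100s
)
```

The 4-GPU distributed setup would be launched with, for example, `torchrun --nproc_per_node=4 train.py` (`train.py` is a placeholder name).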