---
license: cc-by-nc-4.0
---
## KoLLaVA: Korean Large Language and Vision Assistant (feat. LLaVA)

This model is a large multimodal model (LMM) that combines an LLM (LLaMA-2-7b-ko) with the visual encoder of CLIP (ViT-14), trained on a Korean visual-instruction dataset using QLoRA.

Detailed code is available in the [KoLLaVA](https://github.com/tabtoyou/KoLLaVA/tree/main) GitHub repository.
### Training hyperparameters

- learning_rate: 2e-4
- train_batch_size: 16
- distributed_type: multi-GPU (RTX 3090 24GB)
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 4
- lr_scheduler_type: cosine
- num_epochs: 1
- lora_enable: True
- bits: 4

Model License: cc-by-nc-4.0
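The hyperparameters above can be sanity-checked with a line of arithmetic: the effective (global) train batch size is the per-device batch size times the number of devices times the gradient-accumulation steps. A minimal sketch, with the values copied from the list above:

```python
# Effective global batch size = per-device batch * num devices * grad-accum steps.
# Values taken from the hyperparameter list above.
train_batch_size = 16            # per-device train batch size
num_devices = 4                  # RTX 3090 GPUs
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)    # 128, matching total_train_batch_size above
```

This is why the list shows total_train_batch_size: 128 even though each GPU only sees 16 samples per step.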