yahma committed
Commit
003c2a4
1 Parent(s): 10faee1

Update README.md

Files changed (1):
  1. README.md +9 -9
README.md CHANGED
@@ -7,15 +7,15 @@ This repo contains a low-rank adapter for LLaMA-7b fit on the Cleaned Alpaca dat

  This version of the weights was trained with the following hyperparameters:

- Cleaned dataset: Snapshot March 31, 2023
+ Cleaned dataset: Snapshot April 2, 2023
  Epochs: 3
- Validation set size: 2000
+ Validation set size: 1500
  Batch size: 128
- Micro batch size: 12
+ Micro batch size: 8
  Cutoff length: 512
  Learning rate: 3e-4
- Lora r: 8
- Lora target modules: q_proj, v_proj
+ Lora r: 16
+ Lora target modules: q_proj, k_proj, v_proj, o_proj

  That is:

@@ -25,7 +25,7 @@ python finetune.py \
  --num_epochs=3 \
  --cutoff_len=512 \
  --output_dir='./lora-alpaca' \
- --lora_target_modules='[q_proj,v_proj]' \
- --lora_r=8 \
- --val_set_size 2000 \
- --micro_batch_size=12
+ --lora_target_modules='[q_proj,k_proj, v_proj, o_proj]' \
+ --lora_r=16 \
+ --val_set_size 1500 \
+ --micro_batch_size=8
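
Raising the LoRA rank from 8 to 16 and widening the target modules to q_proj, k_proj, v_proj, and o_proj gives this adapter more trainable parameters than the previous run. For reference, here is a minimal sketch of attaching an adapter like this one to a LLaMA-7b base model with Hugging Face PEFT and running a single Alpaca-style prompt. The base checkpoint name and the adapter repo id are placeholders I chose for illustration, not values taken from this commit.

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Base LLaMA-7b checkpoint; this repo id is an assumed placeholder.
BASE_MODEL = "decapoda-research/llama-7b-hf"
# Placeholder id for the adapter repository this commit documents.
ADAPTER = "yahma/alpaca-7b-lora"

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
base = LlamaForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
# Attach the low-rank adapter (r=16, q/k/v/o projections) on top of the base weights.
model = PeftModel.from_pretrained(base, ADAPTER)
model.eval()

# Alpaca-style prompt (instruction-only variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))

If a standalone checkpoint is preferred, recent PEFT versions also expose merge_and_unload() on LoRA models to fold the adapter weights back into the base model.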