This repo contains a low-rank adapter for LLaMA-7b fit on the Stanford Alpaca dataset.
This version of the weights was trained with the following hyperparameters:
- Dataset: [Cleaned Alpaca dataset](https://github.com/gururise/AlpacaDataCleaned), snapshot of March 31, 2023
- Epochs: 3
- Validation set size: 2000
- Batch size: 128
- Micro batch size: 12
- Cutoff length: 512
- Learning rate: 3e-4
- LoRA r: 8
- LoRA target modules: q_proj, v_proj
That is:
```bash
python finetune.py \
    --base_model='decapoda-research/llama-7b-hf' \
    --data_path 'yahma/alpaca-cleaned' \
    --num_epochs=3 \
    --cutoff_len=512 \
    --output_dir='./lora-alpaca' \
    --lora_target_modules='[q_proj,v_proj]' \
    --lora_r=8 \
    --val_set_size=2000 \
    --micro_batch_size=12
```
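
To run inference with the adapter, it can be loaded on top of the frozen base model with the `peft` library. The snippet below is a minimal sketch, not part of this repo's training code; the adapter repo id and the Alpaca-style prompt are placeholders, and `transformers` plus `peft` are assumed to be installed.

```python
# Minimal inference sketch (assumes transformers + peft are installed).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"
adapter_repo = "your-username/lora-alpaca"  # placeholder: replace with this repo's id

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_repo)
model.eval()

# Alpaca-style prompt (illustrative; adjust to your prompt template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three fruits.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```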