---
license: apache-2.0
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
tags:
- llama-factory
- lora
datasets:
- Nekochu/Luminia-mixture
language:
- en
---
Fine-tuning of `Meta-Llama-3.1-8B-Instruct` with a focus on role-play (RP) and uncensored output.
<details>
<summary>This training can be replicated with LLaMA-Factory.</summary>
Stage A: SFT
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage sft --do_train True --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 16 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset psy_mental_health,faproulette_co-OCR-fixer,ascii_art,Uncensored_DAN,Lumimaid-v2,Degrees_of_Lewdity,qa-unc-sft --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 1000 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --neat_packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\Luminia-8B-RP --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --lora_target all --use_adam_mini True --create_new_adapter True
```
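Both stages pass `--template alpaca`. For reference, a minimal sketch of the classic Alpaca prompt layout that template name refers to (`format_alpaca` is a hypothetical helper for illustration, not LLaMA-Factory's internal formatter):

```python
def format_alpaca(instruction: str, input_text: str = "") -> str:
    """Render an (instruction, input) pair in the classic Alpaca prompt layout."""
    header = ("Below is an instruction that describes a task"
              + (", paired with an input that provides further context" if input_text else "")
              + ". Write a response that appropriately completes the request.")
    prompt = f"{header}\n\n### Instruction:\n{instruction}\n"
    if input_text:
        prompt += f"\n### Input:\n{input_text}\n"
    prompt += "\n### Response:\n"
    return prompt
```

With `--packing True --neat_packing True`, multiple such prompt/response pairs are packed into each 8192-token window without cross-contaminating attention.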
Stage B: ORPO preference tuning, continued from the Stage A adapter (LLaMA-Factory runs ORPO through `--stage dpo` with `--pref_loss orpo`)
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage dpo --do_train True --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 16 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset qa-unc-dpo --cutoff_len 4000 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\Luminia-8B-RP-DPO --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.35 --lora_target all --pref_beta 0.1 --pref_ftx 0 --pref_loss orpo --adapter_name_or_path saves\LLaMA3.1-8B-Chat\lora\Luminia-8B-RP
```
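Stage B weights the odds-ratio term with `--pref_beta 0.1`. A minimal sketch of that penalty on sequence probabilities (illustrative only; the trainer computes it from token log-probabilities):

```python
import math

def odds(p: float) -> float:
    """Odds of a probability p in (0, 1)."""
    return p / (1.0 - p)

def orpo_penalty(p_chosen: float, p_rejected: float, beta: float = 0.1) -> float:
    """beta * -log sigmoid(log odds ratio): small when the model already
    prefers the chosen response, large when it prefers the rejected one."""
    log_or = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return beta * -math.log(1.0 / (1.0 + math.exp(-log_or)))
```

This term is added to the ordinary SFT loss on the chosen response, which is why ORPO needs no separate reference model.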
<details>
<summary>dataset_info.json</summary>
```json
{
  "psy_mental_health": {
    "file_name": "psy_mental_health.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output",
      "system": "system",
      "history": "history"
    }
  },
  "Uncensored_DAN": {
    "file_name": "Uncensored_DAN.json",
    "formatting": "alpaca"
  },
  "faproulette_co-OCR-fixer": {
    "file_name": "faproulette_co-OCR-fix-gpt4o_qa_fixer.json",
    "formatting": "alpaca"
  },
  "faproulette_co-OCR-fix-gpt4o_qa": {
    "file_name": "faproulette_co-OCR-fix-gpt4o_qa.json",
    "formatting": "alpaca"
  },
  "ascii_art": {
    "file_name": "ascii_art.json",
    "formatting": "alpaca"
  },
  "Lumimaid-v2": {
    "file_name": "Lumimaid-v2.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output",
      "system": "system",
      "history": "history"
    }
  },
  "Degrees_of_Lewdity": {
    "file_name": "Degrees_of_Lewdity_Story-v0.4-5.json",
    "formatting": "alpaca"
  },
  "qa-unc-sft": {
    "file_name": "qa-unc-dpo.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "response": "chosen"
    }
  },
  "qa-unc-dpo": {
    "file_name": "qa-unc-dpo.json",
    "ranking": true,
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "chosen": "chosen",
      "rejected": "rejected"
    }
  }
}
```
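A quick sanity check for entries like the above before launching a run (`check_dataset_info` is a hypothetical helper, not part of LLaMA-Factory):

```python
def check_dataset_info(info: dict) -> list[str]:
    """Return a list of problems: entries missing a file_name, or ranking
    (preference) entries lacking chosen/rejected column mappings."""
    problems = []
    for name, entry in info.items():
        if "file_name" not in entry:
            problems.append(f"{name}: missing file_name")
        if entry.get("ranking"):
            cols = entry.get("columns", {})
            for key in ("chosen", "rejected"):
                if key not in cols:
                    problems.append(f"{name}: ranking dataset missing '{key}' column")
    return problems
```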
</details>
</details>