---
license: apache-2.0
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
tags:
- llama-factory
- lora
datasets:
- Nekochu/Luminia-mixture
language:
- en
---

Fine-tuning of `Llama-3.1-8B` with a focus on roleplay (RP) and uncensored output.
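
A minimal usage sketch with `transformers` + `peft` (the adapter repo id `Nekochu/Luminia-8B-RP` below is inferred from the output directory name in the training command and may differ; substitute a local adapter path if needed). Training used the `alpaca` template, so prompts follow that format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Adapter id is an assumption (taken from --output_dir); adjust as needed.
model = PeftModel.from_pretrained(model, "Nekochu/Luminia-8B-RP")

# Alpaca-style prompt, matching --template alpaca used in training.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce yourself in character as a tavern keeper.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```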

<details>
  <summary>This training can be replicated using LLaMA-Factory.</summary>

Stage A: SFT
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train ^
  --stage sft --do_train True ^
  --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct ^
  --preprocessing_num_workers 16 --finetuning_type lora --template alpaca ^
  --rope_scaling linear --flash_attn fa2 ^
  --dataset_dir data ^
  --dataset psy_mental_health,faproulette_co-OCR-fixer,ascii_art,Uncensored_DAN,Lumimaid-v2,Degrees_of_Lewdity,qa-unc-sft ^
  --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 ^
  --per_device_train_batch_size 1 --gradient_accumulation_steps 1 ^
  --lr_scheduler_type cosine --max_grad_norm 1.0 ^
  --logging_steps 10 --save_steps 1000 --warmup_steps 1000 ^
  --neftune_noise_alpha 5 --optim adamw_8bit ^
  --packing True --neat_packing True --report_to none ^
  --output_dir saves\LLaMA3.1-8B-Chat\lora\Luminia-8B-RP ^
  --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True ^
  --quantization_bit 4 --quantization_method bitsandbytes ^
  --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --lora_target all ^
  --use_adam_mini True --create_new_adapter True
```
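
For reference, the LoRA flags above correspond roughly to this `peft` configuration (a sketch of what LLaMA-Factory builds internally; `target_modules="all-linear"` is my reading of `--lora_target all`):

```python
from peft import LoraConfig

# Approximate peft equivalent of the Stage A LoRA flags.
lora_config = LoraConfig(
    r=32,                         # --lora_rank 32
    lora_alpha=64,                # --lora_alpha 64
    lora_dropout=0.15,            # --lora_dropout 0.15
    target_modules="all-linear",  # --lora_target all: LoRA on every linear layer
    task_type="CAUSAL_LM",
)
```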

Stage B: Continued from the Stage A adapter with ORPO preference training (run through LLaMA-Factory's `dpo` stage with `--pref_loss orpo`)
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train ^
  --stage dpo --do_train True ^
  --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct ^
  --preprocessing_num_workers 16 --finetuning_type lora --template alpaca ^
  --rope_scaling linear --flash_attn fa2 ^
  --dataset_dir data --dataset qa-unc-dpo ^
  --cutoff_len 4000 --learning_rate 5e-05 --num_train_epochs 1.0 --max_samples 100000 ^
  --per_device_train_batch_size 1 --gradient_accumulation_steps 1 ^
  --lr_scheduler_type cosine --max_grad_norm 1.0 ^
  --logging_steps 10 --save_steps 1000 --warmup_steps 0 ^
  --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none ^
  --output_dir saves\LLaMA3.1-8B-Chat\lora\Luminia-8B-RP-DPO ^
  --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True ^
  --quantization_bit 4 --quantization_method bitsandbytes ^
  --lora_rank 32 --lora_alpha 64 --lora_dropout 0.35 --lora_target all ^
  --pref_beta 0.1 --pref_ftx 0 --pref_loss orpo ^
  --adapter_name_or_path saves\LLaMA3.1-8B-Chat\lora\Luminia-8B-RP
```
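
Note that ORPO needs no frozen reference model: it adds an odds-ratio penalty on top of the usual SFT loss, weighted here by `--pref_beta 0.1`. A toy sketch of that term (illustrative only, not LLaMA-Factory's implementation):

```python
import torch
import torch.nn.functional as F

def orpo_penalty(chosen_logps, rejected_logps, beta=0.1):
    """Odds-ratio term from the ORPO paper; *_logps are length-normalized
    sequence log-probabilities, and beta plays the role of --pref_beta."""
    # log-odds(y) = log p - log(1 - p)
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))
    # Penalize when the rejected response is not less likely than the chosen one.
    return -beta * F.logsigmoid(log_odds_chosen - log_odds_rejected).mean()

# e.g. average token log-prob of -0.8 for chosen vs -1.6 for rejected:
print(orpo_penalty(torch.tensor([-0.8]), torch.tensor([-1.6])))  # small positive penalty
```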


<details>
  <summary>dataset_info.json</summary>

`dataset_info.json`:
```json
{
  "psy_mental_health": {
    "file_name": "psy_mental_health.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output",
      "system": "system",
      "history": "history"
    }
  },
  "Uncensored_DAN": {
    "file_name": "Uncensored_DAN.json",
    "formatting": "alpaca"
  },
  "faproulette_co-OCR-fixer": {
    "file_name": "faproulette_co-OCR-fix-gpt4o_qa_fixer.json",
    "formatting": "alpaca"
  },
  "faproulette_co-OCR-fix-gpt4o_qa": {
    "file_name": "faproulette_co-OCR-fix-gpt4o_qa.json",
    "formatting": "alpaca"
  },
  "ascii_art": {
    "file_name": "ascii_art.json",
    "formatting": "alpaca"
  },
  "Lumimaid-v2": {
    "file_name": "Lumimaid-v2.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output",
      "system": "system",
      "history": "history"
    }
  },
  "Degrees_of_Lewdity": {
    "file_name": "Degrees_of_Lewdity_Story-v0.4-5.json",
    "formatting": "alpaca"
  },
  "qa-unc-sft": {
    "file_name": "qa-unc-dpo.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "response": "chosen"
    }
  },
  "qa-unc-dpo": {
    "file_name": "qa-unc-dpo.json",
    "ranking": true,
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "chosen": "chosen",
      "rejected": "rejected"
    }
  }
}
```
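
A quick sanity check that every entry points at an existing file (a sketch; assumes `dataset_info.json` sits in LLaMA-Factory's `data/` directory, matching `--dataset_dir data` above):

```python
import json
from pathlib import Path

data_dir = Path("data")  # matches --dataset_dir data
info = json.loads((data_dir / "dataset_info.json").read_text())

for name, spec in info.items():
    path = data_dir / spec["file_name"]
    kind = "ranking" if spec.get("ranking") else spec.get("formatting", "?")
    status = "ok" if path.exists() else "MISSING"
    print(f"{name:<35} {kind:<8} {path.name} [{status}]")
```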
</details>

</details>