# RLHF (Beta)

### Overview

Reinforcement Learning from Human Feedback (RLHF) is a method for optimizing a language model using human
feedback. Methods include, but are not limited to:

- Proximal Policy Optimization (PPO) (not yet supported in axolotl)
- Direct Preference Optimization (DPO)
- Identity Preference Optimization (IPO)

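To ground the distinction between these methods: DPO skips an explicit reward model and directly optimizes a logistic loss on preference pairs. A minimal sketch of the per-pair DPO objective (variable names are illustrative, not the axolotl/TRL API):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * policy-vs-reference margin)."""
    margin = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With no margin the loss is log(2); it falls as the policy prefers
# the chosen response more strongly than the reference model does.
```

IPO modifies this objective by replacing the log-sigmoid with a squared loss toward a fixed target margin, which regularizes against overfitting to the preference data.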

### RLHF using Axolotl

> [!IMPORTANT]
> This is a BETA feature and many features are not fully implemented. You are encouraged to open new PRs to improve the integration and functionality.

The various RL training methods are implemented in TRL and wrapped by axolotl. Below are examples showing how to use various preference datasets to train models that use ChatML.

#### DPO
```yaml
rl: dpo
datasets:
  - path: Intel/orca_dpo_pairs
    split: train
    type: chatml.intel
  - path: argilla/ultrafeedback-binarized-preferences
    split: train
    type: chatml.argilla
```

#### IPO
```yaml
rl: ipo
```
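IPO reuses the same preference-dataset plumbing as DPO; only the `rl` setting changes. A minimal sketch combining the two, reusing the `Intel/orca_dpo_pairs` dataset from the DPO example (dataset choice is illustrative):

```yaml
rl: ipo
datasets:
  - path: Intel/orca_dpo_pairs
    split: train
    type: chatml.intel
```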

#### Using local dataset files
```yaml
datasets:
  - ds_type: json
    data_files:
      - orca_rlhf.jsonl
    split: train
    type: chatml.intel
```
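Each line of the JSONL file should be a single preference record. For the `chatml.intel` type, the expected fields mirror those of `Intel/orca_dpo_pairs` (`system`, `question`, `chosen`, `rejected`); the values below are illustrative:

```json
{"system": "You are a helpful assistant.", "question": "What is 2 + 2?", "chosen": "2 + 2 equals 4.", "rejected": "2 + 2 equals 5."}
```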

#### TRL auto-unwrap for PEFT

TRL supports auto-unwrapping PEFT models, so a separate reference model does not need to be loaded, which reduces VRAM usage. This is enabled by default. To turn it off, pass the following config:

```yaml
# Load a separate reference model when training with an adapter (disables auto-unwrap).
rl_adapter_ref_model: true
```
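With a config file in hand, RL training launches the same way as regular fine-tuning; a sketch assuming your config is saved as `dpo.yml` (the filename is illustrative):

```shell
# Launch preference training via accelerate; the rl: setting in the
# config selects the training method (dpo, ipo, ...).
accelerate launch -m axolotl.cli.train dpo.yml
```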