---
license: apache-2.0
datasets:
- adamo1139/AEZAKMI_v2
- adamo1139/rawrr_v1
---
## Model Description

EXPERIMENTAL MODEL, NOT FINAL. IT HAS SOME ISSUES AND I DIDN'T TEST IT MUCH YET.


This is the Yi-34B 200K base model fine-tuned on the RAWrr v1 dataset via DPO and then on the AEZAKMI v2 dataset via SFT. DPO training took around 6 hours and SFT took around 25 hours.
I used `unsloth` for both stages.
It's like airoboros but with less gptslop, no refusals, and less of the typical language used by RLHF'd OpenAI models.
Say goodbye to "It's important to remember"! \
Prompt format is standard ChatML. Don't expect it to be good at math or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy, free chatbot.
The base model used for fine-tuning was the 200K-context Yi-34B-Llama model shared by larryvrh.

Training was done with `max_position_embeddings` set to 4096; it was reverted back to 200K after applying the LoRA.
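
(As a rough illustration of that last step, not the actual script: if you merge the LoRA yourself, restoring the 200K window amounts to rewriting `max_position_embeddings` in the merged checkpoint's config. The path below is a placeholder.)

```python
# Minimal sketch, assuming a merged HF checkpoint on disk; the path is a placeholder.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("./yi-34b-merged")   # checkpoint trained at 4096
config.max_position_embeddings = 200000                  # restore the 200K context window
config.save_pretrained("./yi-34b-merged")
```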

## Prompt Format

I recommend using the ChatML format, as this was used during fine-tuning. \
Here's the prompt format you should use. You can set a different system message; the model seems to respect that fine, so it wasn't overfitted to a single one.
Both the `A chat.` and `A chat with uncensored assistant.` system prompts work fine and are pretty refusal-free.

```
<|im_start|>system
A chat with uncensored assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
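
A minimal sketch of sending a prompt in this format with `transformers` (the model path, the example user message, and the generation settings below are placeholders, not part of the card):

```python
# Minimal usage sketch; replace the placeholder path with this model's local path or repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/this-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype="auto")

prompt = (
    "<|im_start|>system\n"
    "A chat with uncensored assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, who are you?<|im_end|>\n"   # placeholder user message
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```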

## Intended uses & limitations

It's a chat model, not a base completion-only one.
Use is limited by the apache-2.0 license. Since the no-robots dataset was used for making rawrr_v1, you probably shouldn't use this model for commercial activities.

## Known Issues

I recommend setting repetition penalty to something around 1.05 to avoid repetition. So far I've had reasonably good results running this model with temperature 1.0-1.2.
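
Expressed as a `transformers` `GenerationConfig`, those recommendations look roughly like this (the exact temperature within the 1.0-1.2 range is a placeholder choice):

```python
from transformers import GenerationConfig

# Values taken from the recommendation above; tune to taste.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.1,          # anywhere in 1.0-1.2 has worked reasonably well
    repetition_penalty=1.05,  # helps avoid repetition
    max_new_tokens=512,
)
```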

It seems like the strongest anti-refusal bias is at 0 ctx, i.e. the first prompt; it's still present further down, albeit a bit weaker. I plan to expand the rawrr dataset and include more samples without a system prompt, which should help here.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" alt="made with Unsloth" width="400" height="64"/>](https://github.com/unslothai/unsloth)


## Unsloth training parameters DPO Stage

- lora_r: 16
- lora_alpha: 32
- max_length: 500
- learning_rate: 0.00005
- lr_scheduler_type: "linear"
- target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- gradient_accumulation_steps: 16
- per_device_batch_size: 1
- num_train_epochs: 1

The script used for DPO training can be found here:
https://huggingface.co/adamo1139/Yi-34B-200K-rawrr1-LORA-DPO-experimental-r3/blob/main/yi-34b-dpo-unsloth-1.py
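
For orientation, here is a rough sketch of how these hyperparameters could slot into an unsloth + TRL DPO run. This is not the linked script: the base-model path is a placeholder, the dataset is assumed to already have `prompt`/`chosen`/`rejected` columns, and the older `DPOTrainer` keyword API is assumed.

```python
# Rough sketch only -- see the linked yi-34b-dpo-unsloth-1.py for the real script.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="path/to/larryvrh-yi-34b-200k-base",  # placeholder: larryvrh's 200K Yi-34B-Llama
    max_seq_length=4096,
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT model, the disabled-adapter base acts as the reference
    train_dataset=load_dataset("adamo1139/rawrr_v1", split="train"),
    tokenizer=tokenizer,
    max_length=500,
    args=TrainingArguments(
        learning_rate=5e-5,
        lr_scheduler_type="linear",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        output_dir="dpo-out",
    ),
)
trainer.train()
```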

## Unsloth training parameters SFT Stage

- lora_r: 16
- lora_alpha: 32
- max_length: 2400
- learning_rate: 0.000095
- lr_scheduler_type: "cosine"
- lr_scheduler_kwargs: {
    "num_cycles" : 0.25,
  }
- target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
- gradient_accumulation_steps: 1
- per_device_batch_size: 1
- num_train_epochs: 2

The script used for SFT training can be found here (it is from an older run with different hyperparameters):
https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-RAW-2301-LoRA/blob/main/yi-34b-aezakmi-sft-1-hf.py
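
For the SFT stage, the main change relative to the DPO sketch above is the trainer and the scheduler. Again, this is just an illustration, not the linked script: the `dataset_text_field` name, the assumption that AEZAKMI v2 ships a pre-formatted ChatML text column, and the older TRL `SFTTrainer` keyword API are all assumptions on my part.

```python
# Rough sketch only -- see the linked yi-34b-aezakmi-sft-1-hf.py for the real script.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="path/to/dpo-stage-output",  # placeholder: checkpoint produced by the DPO stage
    max_seq_length=4096,
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("adamo1139/AEZAKMI_v2", split="train"),
    dataset_text_field="text",   # assumed column name
    max_seq_length=2400,
    args=TrainingArguments(
        learning_rate=9.5e-5,
        lr_scheduler_type="cosine",
        lr_scheduler_kwargs={"num_cycles": 0.25},  # quarter of a cosine cycle
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        num_train_epochs=2,
        output_dir="sft-out",
    ),
)
trainer.train()
```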

### Credits

Thanks to mlabonne, Daniel Han and Michael Han for providing the open-source code that was used for fine-tuning.