---
license: apache-2.0
tags:
- finetune
- fine-tune
datasets:
- adamo1139/rawrr_v1
---

NEW STRONGER RAWRR FINETUNE COMING SOON! 

This model is Yi-34B-200K fine-tuned with DPO on the rawrr_v1 dataset using QLoRA at ctx 200, lora_r 4 and lora_alpha 8. I then merged the adapter with the base model.
This model is akin to raw LLaMa 65B: it is not meant to follow instructions, but should instead be useful as a base for further fine-tuning.
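The adapter-merge step mentioned above can be sketched as follows, assuming the DPO/QLoRA adapter was saved in `peft` format. The adapter path and output directory are placeholders, not the actual artifacts used for this model.

```python
# Hedged sketch of merging a LoRA adapter into its base model with peft.
# "path/to/dpo-qlora-adapter" is a placeholder, not the real adapter path.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B-200K", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/dpo-qlora-adapter")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("yi-34b-200k-rawrr-merged")
```

After `merge_and_unload()`, the result is a plain `transformers` model with no adapter dependency, which is why the merged weights can be published standalone.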

Training on the rawrr_v1 dataset made this model issue fewer refusals, especially on benign topics, and made it more completion-focused rather than instruct-focused.
The base Yi-34B-200K suffers from contamination with instruct and refusal datasets; I am attempting to fix that by training base models with DPO on the rawrr dataset, making them more raw.
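Because the model is completion-focused, it should be prompted with raw text to continue rather than with a chat or instruction template. A minimal usage sketch, where the repo id is a placeholder for this model's actual Hugging Face path:

```python
# Hedged usage sketch for a completion-style base model.
# "your-username/yi-34b-200k-rawrr" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("your-username/yi-34b-200k-rawrr")
model = AutoModelForCausalLM.from_pretrained(
    "your-username/yi-34b-200k-rawrr", torch_dtype="auto", device_map="auto"
)

prompt = "The history of the printing press begins"  # plain text, no instruct template
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```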

License: yi-license + non-commercial use only