
The anti-refusal, anti-instruct capabilities of this model are much stronger than those of yi-34b-200k-rawrr-dpo-1. This model is Yi-34B-200K fine-tuned with DPO on the rawrr_v1 dataset using QLoRA at ctx 500, lora_r 16 and lora_alpha 16. I then applied the adapter to the base model. This model is akin to raw LLaMA 65B: it's not meant to follow instructions, but should be useful as a base for further fine-tuning.
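
For reference, here is a minimal sketch of the adapter-application step using peft's `merge_and_unload`; `path/to/rawrr-dpo-adapter` and the output directory name are hypothetical placeholders, not the actual training artifacts.

```python
# Minimal sketch: fold a DPO-trained QLoRA adapter into the Yi-34B-200K
# base weights. The adapter path below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B-200K", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "path/to/rawrr-dpo-adapter")  # hypothetical path
model = model.merge_and_unload()  # merge the LoRA deltas into the base weights
model.save_pretrained("yi-34b-200k-rawrr-dpo-merged")

tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B-200K")
tokenizer.save_pretrained("yi-34b-200k-rawrr-dpo-merged")
```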

The rawrr_v1 dataset makes this model issue fewer refusals, especially on benign topics, and makes it completion-focused rather than instruct-focused. Base Yi-34B-200K suffers from contamination with instruct and refusal datasets; I am attempting to fix that by training base models with DPO on the rawrr dataset, making them more raw. You should be able to achieve good 0ctx uncensoredness and a good lack of gptslop if you fine-tune this model for instruct.
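
To illustrate the intended completion-focused use, here is a minimal sketch that prompts the model with raw text rather than a chat template; it assumes the merged weights were saved locally as in the sketch above.

```python
# Minimal sketch: use the merged model as a raw completion model
# (no chat template), which is how this checkpoint is meant to be used.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "yi-34b-200k-rawrr-dpo-merged", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("yi-34b-200k-rawrr-dpo-merged")

prompt = "The quickest way to cook rice is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```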
