This is a fine-tuned version of Mistral Instruct v0.2, trained on Undi95's toxicsharegpt-NoWarning.jsonl. (While writing this I realized that it is not the DPO version but the NoWarning version. Oops!)
The fine-tune was done at an 8K context length with LoRA using axolotl.
The goal of this fine-tune was to check whether the model remains valid at a 32K context despite being trained at a much lower context length (8K).
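For reference, an axolotl config along these lines could reproduce the setup described above. Only the base model, the dataset format, the 8K `sequence_len`, and the LoRA adapter reflect what is stated here; every other value (LoRA rank, learning rate, batch sizes, dataset path) is an assumed placeholder, not the actual configuration used:

```yaml
# Hypothetical axolotl config sketch -- hyperparameters are assumptions.
base_model: mistralai/Mistral-7B-Instruct-v0.2

datasets:
  - path: toxicsharegpt-NoWarning.jsonl   # local path assumed
    type: sharegpt

sequence_len: 8192        # 8K training context, as described above
adapter: lora             # LoRA fine-tune, as described above

# Assumed LoRA hyperparameters:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj

# Assumed training settings:
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
```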