
![komorebi](komorebi.png)

This model is the result of a multi-phase KTO fine-tuning process, following the jondurbin gutenberg approach, which produces three separate LoRAs that are then merged in sequence.
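To illustrate what "merged in sequence" means mechanically, here is a minimal NumPy sketch (not the actual training pipeline): each LoRA stores low-rank factors A and B, and merging folds the scaled update `(alpha/r) * B @ A` into the frozen weight. The matrix sizes, alpha, and rank below are illustrative, not the model's actual hyperparameters.

```python
import numpy as np

def merge_lora(weight, A, B, alpha, rank):
    """Fold one LoRA update into a weight matrix: W + (alpha/r) * B @ A."""
    return weight + (alpha / rank) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 16, 4

W = rng.normal(size=(d_out, d_in))  # stand-in for a frozen base weight
adapters = [
    (rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r)))
    for _ in range(3)  # three LoRAs, one per training phase
]

# Sequential merge: each adapter is folded into the already-merged weight.
merged = W
for A, B in adapters:
    merged = merge_lora(merged, A, B, alpha=8, rank=r)

# Since the updates are purely additive, the sequential merge equals the
# base weight plus the sum of all three scaled updates.
expected = W + sum((8 / r) * (B @ A) for A, B in adapters)
assert np.allclose(merged, expected)
```

In practice this is what adapter libraries do when an adapter is merged and unloaded; merging one LoRA at a time, in order, gives the same result as adding all three updates at once.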

The resulting model exhibits a significant decrease in typical Llama 3.1 "slop" outputs.

Experimental. Please give feedback. Begone if you demand perfection.

I did most of my testing with temp 1.4, min-p 0.15, DRY 0.8. I also experimented with enabling XTC at threshold 0.1, probability 0.50.

As context grows, you may want to bump temp and min-p, and maybe even DRY.

Downloads last month: 70
Model size: 8.03B params (Safetensors)
Tensor type: BF16

Model tree for crestf411/L3.1-8B-komorebi: 11 quantizations available.
