Experimental de-slopped, de-aligned, EQ-tuned model trained via ORPO on 4k synthetic preference pairs for 3 epochs on a single A100; inspired by Gutenberg-DPO.

Despite success on the de-slopping front, I seem to have totalled the model's prefrontal cortex in the process. So it goes. Training data is everything.
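For anyone curious what a run like this looks like, below is a minimal sketch of an ORPO fine-tune using TRL's `ORPOTrainer`, assuming a pairwise dataset with `prompt`/`chosen`/`rejected` columns. The base model id, dataset file, and all hyperparameters except the epoch count are placeholders, not the actual recipe used here.

```python
# Minimal ORPO training sketch (TRL). Placeholder names throughout;
# only "3 epochs, single A100" comes from the description above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "your-base-9b-model"  # placeholder: base model not restated here
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# ~4k synthetic preference pairs with "prompt", "chosen", "rejected" fields.
dataset = load_dataset("json", data_files="synthetic_pairs.jsonl", split="train")

config = ORPOConfig(
    output_dir="caudwell-9b-v0",
    num_train_epochs=3,              # matches the 3-epoch run described above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # assumed; keeps a 9B model on one A100
    learning_rate=8e-6,              # assumed; not stated in the card
    beta=0.1,                        # ORPO's odds-ratio loss weight (lambda)
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
    logging_steps=10,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # `tokenizer=` in older TRL releases
)
trainer.train()
```

ORPO folds the preference signal into a single odds-ratio penalty on top of the standard SFT loss, so no separate reference model is needed, which is part of why a 9B run fits on one A100.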
