Qwen2.5-Gutenberg-Doppel-32B

Qwen/Qwen2.5-32B-Instruct fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
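
Because this repository is a 4bpw/h6 EXL2 quantization (see the model tree below), it is intended for ExLlamaV2-based backends rather than plain Transformers. The following is a minimal generation sketch, assuming the quantized weights have been downloaded to a local directory; the path and generation settings are placeholders.

```python
# Minimal ExLlamaV2 inference sketch for the 4bpw h6 EXL2 quant.
# The local model path and generation settings below are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Directory containing the downloaded EXL2 weights (placeholder path).
model_dir = "/models/Qwen2.5-Gutenberg-Doppel-32B-4bpw-h6-exl2"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(prompt="Write the opening paragraph of a gothic novel.",
                            max_new_tokens=200)
print(output)
```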

Method

Tuned with ORPO on 2x A100 GPUs for 1.25 epochs.
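
The sketch below shows roughly what such an ORPO run over the two Gutenberg DPO datasets looks like with TRL's ORPOTrainer. Apart from the 1.25 epochs reported above, every hyperparameter here is an illustrative assumption, not the setting actually used for this model.

```python
# ORPO fine-tuning sketch with TRL (hyperparameters are assumptions).
import torch
from datasets import load_dataset, concatenate_datasets
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "Qwen/Qwen2.5-32B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Both datasets are DPO-style (prompt/chosen/rejected); keep only those columns
# so they can be concatenated.
cols = ["prompt", "chosen", "rejected"]
train_dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

config = ORPOConfig(
    output_dir="qwen2.5-gutenberg-doppel-32b",
    num_train_epochs=1.25,           # matches the 1.25 epochs reported above
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=8,   # assumption
    learning_rate=5e-6,              # assumption
    beta=0.1,                        # ORPO lambda; assumption
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # use tokenizer=... on older TRL releases
)
trainer.train()
```

Unlike DPO, ORPO folds the preference penalty into the standard SFT loss, so no frozen reference model is needed, which keeps memory within reach of a 2x A100 setup.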

Model tree for waldie/Qwen2.5-Gutenberg-Doppel-32B-4bpw-h6-exl2

Base model: Qwen/Qwen2.5-32B (this repository is a 4bpw h6 EXL2 quantization of the fine-tune)

Datasets used to train waldie/Qwen2.5-Gutenberg-Doppel-32B-4bpw-h6-exl2

jondurbin/gutenberg-dpo-v0.1
nbeerbower/gutenberg2-dpo