nbeerbower/flammen17-py-DPO-v1-7B AWQ

Model Summary

A Mistral 7B LLM built by merging pretrained models and fine-tuning on Jon Durbin's py-dpo-v0.1 dataset with Direct Preference Optimization (DPO); this repository provides the AWQ-quantized version.
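For readers unfamiliar with DPO, the fine-tuning objective used here can be sketched in a few lines. This is a minimal, self-contained illustration of the standard DPO loss for a single preference pair, assuming log-probabilities have already been computed by the policy and frozen reference models; the function name and `beta` default are illustrative, not taken from this model's training code.

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin is the policy's log-ratio advantage on the chosen
    response over the rejected one, relative to the reference model."""
    margin = ((policy_chosen_lp - ref_chosen_lp)
              - (policy_rejected_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy assigns relatively more probability to the chosen response than the reference does, the margin is positive and the loss drops below log 2 (the value at zero margin).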

Fine-tuned using an A100 on Google Colab. 🙏

Method based on "Fine-tune a Mistral-7b model with Direct Preference Optimization" by Maxime Labonne.
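The AWQ checkpoint can be loaded through the standard `transformers` Auto classes. The sketch below is a hypothetical usage example, not code from this repository: it assumes the `autoawq` package is installed, a CUDA GPU is available, and uses the repo id from this card.

```python
# Hypothetical inference sketch for the AWQ-quantized checkpoint
# (assumes: pip install transformers autoawq, and a CUDA GPU).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "solidrust/flammen17-py-DPO-v1-7B-AWQ"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the quantized model and return a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```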

Safetensors model size: 1.2B params (AWQ-quantized)
Tensor types: I32 · FP16
