Tags: Text Generation · PEFT · Safetensors · trl · dpo · unsloth · conversational

Mistral-SLERP-Merged7B-DPO

What is DPO?


Direct Preference Optimization (DPO) is an algorithm introduced to achieve precise control over the behavior of large-scale unsupervised language models (LMs). It reparameterizes the reward model used in Reinforcement Learning from Human Feedback (RLHF) so that the corresponding optimal policy can be extracted in closed form, which allows the standard RLHF problem to be solved with only a simple classification loss.
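
Concretely, given preference triples (a prompt $x$, a preferred response $y_w$, and a dispreferred response $y_l$), the DPO objective from the original paper is a binary classification loss on the policy $\pi_\theta$ relative to a frozen reference policy $\pi_{\mathrm{ref}}$:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

where $\sigma$ is the logistic function and $\beta$ controls how far the policy may drift from the reference model.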

DPO eliminates the need to sample from the LM during fine-tuning or to perform significant hyperparameter tuning, making it stable, performant, and computationally lightweight. Experiments have shown that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods. It has been found to be particularly effective at controlling the sentiment of generations and at matching or improving response quality in summarization and single-turn dialogue.
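
As an illustration only, the sketch below shows how a preference dataset and a base model can be passed to trl's DPOTrainer. The base model name, dataset ID, and hyperparameter values are placeholders rather than the exact recipe used to train this model, and some argument names differ between trl versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Base model to align (placeholder, not necessarily the checkpoint
# this card's model was trained from).
base_model = "mistralai/Mistral-7B-v0.1"

model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference data with "prompt", "chosen", and "rejected" columns
# (placeholder dataset ID).
dataset = load_dataset("your-org/your-preference-dataset", split="train")

# beta is the coefficient on the implicit KL penalty toward the frozen
# reference policy; 0.1 is a common default, not the value used here.
training_args = DPOConfig(
    output_dir="mistral-slerp-dpo",
    beta=0.1,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # recent trl versions name this processing_class
)
trainer.train()
```

If no separate reference model is supplied, DPOTrainer uses a frozen copy of the starting model as the reference policy, so only a single set of preference pairs and the base checkpoint are needed.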

