* DPO-trained on the [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) dataset to improve [TomGrc/FusionNet_34Bx2_MoE](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE).

## DPO Trainer

TRL supports the DPO Trainer for training language models from preference data, as described in the paper *Direct Preference Optimization: Your Language Model is Secretly a Reward Model* by Rafailov et al., 2023.
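The card does not publish the exact training script, so the following is only a minimal sketch of how this recipe could look with TRL's `DPOTrainer`. All hyperparameters (`beta`, learning rate, batch size, output path) are illustrative assumptions rather than the values used for this model, and keyword names vary slightly across TRL releases.

```python
# Minimal sketch of DPO fine-tuning with TRL.
# Hyperparameters below are illustrative assumptions, NOT the values
# actually used to train this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "TomGrc/FusionNet_34Bx2_MoE"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# The dataset already ships prompt / chosen / rejected columns, which is
# the preference-pair format DPOTrainer expects; extra columns (id,
# source, system) are dropped here for simplicity.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")
dataset = dataset.select_columns(["prompt", "chosen", "rejected"])

# beta controls how far the policy may drift from the reference model.
config = DPOConfig(
    output_dir="fusionnet-truthy-dpo",  # hypothetical output path
    beta=0.1,                           # assumed value, not confirmed by the card
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,      # with None, TRL builds a frozen reference copy of the policy
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # renamed to processing_class in newer TRL releases
)
trainer.train()
```

Given the ~60B-parameter size of the base MoE, a full-parameter run like this would require a multi-GPU setup; in practice a PEFT/LoRA adapter is a common way to make DPO tractable at this scale.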

* Metrics improved by DPO:

![Metrics improvement](34bx2-dpo.jpg)