This model reproduces the results on the Safe-RLHF dataset from the paper "The crucial role of samplers in online direct preference optimization". It is iteration 2 of the DPO-mixp algorithm, trained from https://huggingface.co/zhezi12138/alpaca-7b-iter-1.
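The card does not declare an inference library, but the checkpoint is a 7B LLaMA-lineage model stored in safetensors, so it should load with 🤗 Transformers. A minimal usage sketch follows; the Alpaca-style prompt format is an assumption based on the base model's name, not something the card confirms.

```python
# Minimal usage sketch, assuming the checkpoint is compatible with
# transformers' AutoModelForCausalLM (not confirmed by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhezi12138/alpaca-7b-iter-2-mixp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # card lists F32 tensors; float16 would halve memory
    device_map="auto",
)

# Alpaca-style instruction prompt (assumed format).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what DPO is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```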

Model specifications: safetensors format, 6.61B parameters, F32 tensors.

Model tree for zhezi12138/alpaca-7b-iter-2-mixp: finetuned from zhezi12138/alpaca-7b-iter-1.

Dataset used to train zhezi12138/alpaca-7b-iter-2-mixp: Safe-RLHF.