---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - orpo
  - trl
datasets:
  - alvarobartt/dpo-mix-7k-simplified
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
inference: false
---

# ORPO fine-tune of Mistral 7B v0.1 with DPO Mix 7K

*Image generated with Stable Diffusion XL: "A capybara, a killer whale, and a robot named Ultra being friends"*

This is an ORPO fine-tune of [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1) on [`alvarobartt/dpo-mix-7k-simplified`](https://huggingface.co/datasets/alvarobartt/dpo-mix-7k-simplified).
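A minimal generation sketch with 🤗 `transformers`, assuming this repository is published as `alvarobartt/mistral-orpo-mix` (an id inferred for illustration) and that plain-text prompts are used, since the base model ships no chat template; the sampling settings below are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; adjust to the actual model id on the Hub.
model_id = "alvarobartt/mistral-orpo-mix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain ORPO in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Illustrative sampling settings, not tuned values from the card.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```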

⚠️ Note that the fine-tuning code is still experimental, as the `ORPOTrainer` PR has not been merged yet; follow its progress in the 🤗 `trl` - `ORPOTrainer` PR.
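For reference, a hedged sketch of what the fine-tune could look like with `ORPOTrainer`; the API shown follows the version later merged into `trl` and may differ from the unmerged PR, and the hyperparameters are illustrative, not the ones used for this model:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Preference dataset; assumed to expose the "prompt"/"chosen"/"rejected"
# columns that ORPOTrainer expects.
dataset = load_dataset("alvarobartt/dpo-mix-7k-simplified", split="train")

# Illustrative hyperparameters; `beta` weighs the odds-ratio term in the loss.
config = ORPOConfig(
    output_dir="mistral-orpo-mix",
    beta=0.1,
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

# Note: recent trl releases rename `tokenizer` to `processing_class`.
trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```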

## Reference

- [ORPO: Monolithic Preference Optimization without Reference Model](https://arxiv.org/abs/2403.07691)