---
library_name: transformers
tags: []
---

# MarcoroCapy-7B

This model is a DPO fine-tune of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) on [argilla/distilabel-capybara-dpo-7k-binarized](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized).
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/oey_JDcpqQ0Lw-7KH0AIE.webp)

[Built with Distilabel](https://github.com/argilla-io/distilabel)
## Process

+ Realigned the chat template to ChatML
+ Completed 1 epoch
+ 5e-5 learning rate
+ Training time was about 4.5 hours on 1 H100
+ Cost was ~$20

## GGUF

TODO

## Evaluations

TODO
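Since the chat template was realigned to ChatML, prompts should follow the ChatML turn structure. A minimal sketch of that format (the `to_chatml` helper is illustrative, not part of the model's API; in practice `tokenizer.apply_chat_template` produces the same layout):

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn so the model continues from here
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Feeding a prompt in this shape matches the template the model was trained with; deviating from it tends to degrade chat quality.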