Barcenas-14b-Phi-3-medium-ORPO

Model trained with the ORPO (Odds Ratio Preference Optimization) method, using VAGOsolutions/SauerkrautLM-Phi-3-medium as its base model.
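
Below is a minimal inference sketch with the `transformers` library. The prompt, sampling settings, and dtype/device choices are illustrative assumptions, not an official usage recipe from the author.

```python
# Minimal inference sketch (assumes transformers + torch are installed and
# there is enough memory for a 14B model in FP16; prompt and sampling
# parameters are illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the FP16 weights
    device_map="auto",
    # Older transformers releases may additionally need trust_remote_code=True
    # for Phi-3-based architectures.
)

messages = [{"role": "user", "content": "Explain the ORPO training method in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```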

The model was trained on the mlabonne/orpo-dpo-mix-40k dataset, which combines diverse preference-data sources to improve conversational ability and contextual understanding.
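
For readers unfamiliar with this kind of setup, the sketch below shows what ORPO fine-tuning of the base model on this dataset could look like using TRL's ORPOTrainer. It is not the author's actual training script; the hyperparameters (beta, learning rate, batch size, epochs) are placeholder assumptions.

```python
# Hedged ORPO fine-tuning sketch with TRL's ORPOTrainer; NOT the author's
# exact recipe. All hyperparameter values below are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "VAGOsolutions/SauerkrautLM-Phi-3-medium"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# The dataset provides chosen/rejected preference pairs. Recent TRL versions
# accept this conversational format directly; older ones may need the pairs
# pre-formatted as plain prompt/chosen/rejected text.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = ORPOConfig(
    output_dir="barcenas-orpo",
    beta=0.1,                       # weight of the odds-ratio term (assumed)
    learning_rate=5e-6,             # assumed
    per_device_train_batch_size=1,  # assumed
    num_train_epochs=1,             # assumed
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # `tokenizer=` in older TRL releases
)
trainer.train()
```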

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽

Model size: 14B parameters (Safetensors, FP16)
