mistral-nemo-gutenberg-12B-v2

axolotl-ai-co/romulus-mistral-nemo-12b-simpo fine-tuned on jondurbin/gutenberg-dpo-v0.1.
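
A minimal inference sketch with the Hugging Face transformers library, assuming a transformers version with Mistral-Nemo support and enough GPU memory for the 12.2B BF16 weights; generation settings are illustrative, not recommendations from the model author:

```python
# Minimal inference sketch (assumes transformers with Mistral-Nemo support and
# a GPU large enough for ~12.2B parameters in BF16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/mistral-nemo-gutenberg-12B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build the prompt with the model's chat template.
messages = [{"role": "user", "content": "Write the opening paragraph of a gothic short story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters here are placeholders; tune them for your use case.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```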

Method

Fine-tuned for 1 epoch on a single A100 in Google Colab, following the approach described in Fine-tune Llama 3 with ORPO.
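
A rough sketch of that ORPO setup using TRL's ORPOTrainer is shown below. The hyperparameters (learning rate, beta, batch size, accumulation steps) are assumptions for illustration only, not the exact values used to train this model:

```python
# Hedged sketch of ORPO fine-tuning with TRL; hyperparameters are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "axolotl-ai-co/romulus-mistral-nemo-12b-simpo"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# gutenberg-dpo-v0.1 already provides prompt / chosen / rejected columns,
# which is the preference format ORPOTrainer expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="mistral-nemo-gutenberg-12B-v2",
    num_train_epochs=1,                # 1 epoch, as stated above
    per_device_train_batch_size=1,     # assumption: small batch to fit one A100
    gradient_accumulation_steps=8,     # assumption
    learning_rate=5e-6,                # assumption
    beta=0.1,                          # ORPO preference weight; assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # older TRL API; newer releases use processing_class=
)
trainer.train()
```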

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.05 |
| IFEval (0-shot)     | 62.03 |
| BBH (3-shot)        | 34.73 |
| MATH Lvl 5 (4-shot) |  2.11 |
| GPQA (0-shot)       |  3.69 |
| MuSR (0-shot)       | 13.99 |
| MMLU-PRO (5-shot)   | 27.77 |
Model size: 12.2B params (BF16, Safetensors)

