# Flammades-Mistral-Nemo-12B

nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2 fine-tuned on flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1.
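Both tuning datasets are preference-pair corpora. Their exact column names aren't reproduced here, but TRL-style DPO/ORPO datasets conventionally use a prompt/chosen/rejected schema; a hedged sketch of one such record (field names and contents are illustrative, not taken from either dataset):

```python
# Hypothetical example of a TRL-style preference record; the actual columns
# of flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1 may differ.
record = {
    "prompt": "Is the Great Wall of China visible from the Moon?",
    "chosen": "No. It is far too narrow to be distinguished from the Moon.",
    "rejected": "Yes, it is the only man-made structure visible from the Moon.",
}

def is_preference_record(r: dict) -> bool:
    """Check that a record has the three string fields a DPO/ORPO trainer expects."""
    return all(isinstance(r.get(k), str) for k in ("prompt", "chosen", "rejected"))

print(is_preference_record(record))  # True
```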
## Method

ORPO-tuned with 2x RTX 3090 GPUs for 3 epochs.
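The card doesn't include the training script, but ORPO's distinguishing piece is an odds-ratio penalty added to the usual SFT loss on the chosen response. A minimal scalar sketch of that penalty, following Hong et al.'s formulation (the function name and example log-probabilities are mine, not from this model's training run):

```python
import math

def odds_ratio_loss(chosen_avg_logp: float, rejected_avg_logp: float) -> float:
    """ORPO's odds-ratio term: -log sigmoid(log odds(chosen) - log odds(rejected)),
    where odds(y) = p / (1 - p) and p = exp(average per-token log-prob of y)."""
    def log_odds(avg_logp: float) -> float:
        p = math.exp(avg_logp)            # length-averaged sequence probability, p < 1
        return math.log(p) - math.log1p(-p)
    log_or = log_odds(chosen_avg_logp) - log_odds(rejected_avg_logp)
    # -log(sigmoid(x)) == log(1 + exp(-x)), computed stably with log1p
    return math.log1p(math.exp(-log_or))

# When the chosen completion already has the higher average log-prob,
# the penalty is small; when preferences are inverted, it grows:
low = odds_ratio_loss(chosen_avg_logp=-0.5, rejected_avg_logp=-2.0)
high = odds_ratio_loss(chosen_avg_logp=-2.0, rejected_avg_logp=-0.5)
print(low < high)  # True
```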
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 22.34 |
| IFEval (0-shot, strict accuracy)  | 38.42 |
| BBH (3-shot, normalized accuracy) | 32.39 |
| MATH Lvl 5 (4-shot, exact match)  |  6.19 |
| GPQA (0-shot, acc_norm)           |  7.16 |
| MuSR (0-shot, acc_norm)           | 20.31 |
| MMLU-PRO (5-shot, accuracy)       | 29.57 |
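The Avg. row appears to be the unweighted mean of the six benchmark scores (after the leaderboard's own normalization); a quick arithmetic check:

```python
# Scores from the table above; Avg. should be their simple mean.
scores = {
    "IFEval": 38.42,
    "BBH": 32.39,
    "MATH Lvl 5": 6.19,
    "GPQA": 7.16,
    "MuSR": 20.31,
    "MMLU-PRO": 29.57,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 22.34
```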
## Model tree

- Base model: winglian/m12b-20240721-test010
- Datasets used to train: flammenai/Date-DPO-NoAsterisks, jondurbin/truthy-dpo-v0.1

