---
license: cc-by-nc-4.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
- merge
base_model:
- Q-bert/MetaMath-Cybertron
- berkeley-nest/Starling-LM-7B-alpha
---
# MetaMath-Cybertron-Starling
This model merges Q-bert/MetaMath-Cybertron and berkeley-nest/Starling-LM-7B-alpha using a SLERP (spherical linear interpolation) merge.
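For intuition, SLERP interpolates each pair of weight tensors along the great circle between them rather than along a straight line, which preserves the norm of the interpolated weights better than plain averaging. The model card does not include the merge configuration, so the following is only a minimal NumPy sketch of per-tensor SLERP (the interpolation factor `t` and the fallback threshold are illustrative assumptions; real merges are usually run with a dedicated merge tool):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t moves along
    the great circle between the two flattened tensors.
    """
    v0f, v1f = v0.ravel(), v1.ravel()
    # Cosine of the angle between the flattened tensors.
    dot = np.dot(v0f, v1f) / (np.linalg.norm(v0f) * np.linalg.norm(v1f) + eps)
    dot = np.clip(dot, -1.0, 1.0)
    omega = np.arccos(dot)
    if abs(np.sin(omega)) < eps:
        # Tensors are nearly parallel: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```

In a full merge this function would be applied layer by layer across both checkpoints, often with a different `t` per layer.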
Prompts can use the ChatML format.
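ChatML wraps each conversation turn in `<|im_start|>`/`<|im_end|>` markers with a role name on the first line. A sketch of what a prompt looks like (the system message and question are illustrative, not from this card):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is 2 + 2?<|im_end|>
<|im_start|>assistant
```

Generation continues from the final `<|im_start|>assistant` line.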
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 71.35 |
| ARC (25-shot) | 67.75 |
| HellaSwag (10-shot) | 86.23 |
| MMLU (5-shot) | 65.24 |
| TruthfulQA (0-shot) | 55.94 |
| Winogrande (5-shot) | 81.45 |
| GSM8K (5-shot) | 71.49 |