
# MetaMath-Cybertron-Starling

A merge of Q-bert/MetaMath-Cybertron and berkeley-nest/Starling-LM-7B-alpha using SLERP (spherical linear interpolation).
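SLERP blends two sets of weights along the arc between them rather than along a straight line, which tends to preserve the scale of the parent tensors. The merge itself is normally produced with a dedicated tool; the snippet below is only a minimal sketch of the SLERP formula applied to a single pair of tensors, and the interpolation factor `t = 0.5` is an illustrative assumption, not the actual merge configuration.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate values follow the arc
    between the two (flattened) vectors instead of a straight line.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two direction vectors.
    omega = torch.arccos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Hypothetical usage: interpolate one parameter tensor halfway between the parents.
# merged_weight = slerp(0.5, metamath_cybertron_weight, starling_weight)
```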

You can use the ChatML prompt format, as in the sketch below.
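ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers and leaves an open assistant turn for the model to complete. The following is a minimal sketch of loading the model with transformers and generating from a manually formatted ChatML prompt; the system message and generation settings are illustrative assumptions, not values from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Q-bert/MetaMath-Cybertron-Starling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# ChatML layout: each turn is wrapped in <|im_start|>{role} ... <|im_end|>,
# and the prompt ends with an open assistant turn.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"   # illustrative system message
    "<|im_start|>user\n"
    "What is 12 * 7?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```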

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 71.35 |
| ARC (25-shot) | 67.75 |
| HellaSwag (10-shot) | 86.23 |
| MMLU (5-shot) | 65.24 |
| TruthfulQA (0-shot) | 55.94 |
| Winogrande (5-shot) | 81.45 |
| GSM8K (5-shot) | 71.49 |