This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model is currently ranked #1 on the Open LLM Leaderboard among models up to 13B parameters!
This model was merged using the SLERP merge method.
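SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the great-circle arc between them rather than averaging them linearly, which preserves the geometry of the weights better than a plain lerp. A minimal NumPy sketch of the underlying math (an illustration, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t = 0.0 returns w0, t = 1.0 returns w1.
    """
    # Angle between the two tensors, treated as flat unit vectors.
    v0 = w0.ravel() / (np.linalg.norm(w0) + eps)
    v1 = w1.ravel() / (np.linalg.norm(w1) + eps)
    omega = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        return (1.0 - t) * w0 + t * w1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * w0 + (np.sin(t * omega) / so) * w1
```

mergekit applies this tensor by tensor, with the interpolation factor `t` taken from the configuration below.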
The following models were included in the merge:

- ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
- ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
The following YAML configuration was used to produce this model:
```yaml
base_model: ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
dtype: bfloat16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 28]
        model: ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      - layer_range: [0, 28]
        model: ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
```
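Here `t` is the SLERP interpolation factor: 0 keeps the base model's weights, 1 takes the other model's. mergekit treats a list of values as a gradient interpolated across the layer range, so in this configuration the self-attention weights shift from HomerCreative-Mix toward HomerAnvita-NerdMix with depth while the MLP weights follow the reverse schedule, and all other tensors use a flat 0.5. A small sketch of how such a gradient expands per layer (an illustration that assumes evenly spaced anchor points, not mergekit's internal code):

```python
import numpy as np

# Expand the five self_attn anchor values into a per-layer t over 28 layers.
anchors = [0.0, 0.5, 0.3, 0.7, 1.0]
num_layers = 28
layer_pos = np.linspace(0.0, 1.0, num_layers)
t_per_layer = np.interp(layer_pos, np.linspace(0.0, 1.0, len(anchors)), anchors)
print(t_per_layer.round(3))  # runs from 0.0 at the first layer to 1.0 at the last
```

To reproduce the merge, save the configuration as `config.yaml` and run mergekit's CLI: `mergekit-yaml config.yaml ./output-model-directory`.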
Detailed results can be found here.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 34.62 |
| IFEval (0-shot)     | 78.08 |
| BBH (3-shot)        | 36.98 |
| MATH Lvl 5 (4-shot) | 31.04 |
| GPQA (0-shot)       |  8.61 |
| MuSR (0-shot)       | 14.73 |
| MMLU-PRO (5-shot)   | 38.28 |
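To try the merged model, a standard transformers loading snippet is enough; the repo id below is a placeholder for wherever the merged weights are published:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/your-merged-model"  # placeholder: substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```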