# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
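SLERP (spherical linear interpolation) blends two models along the great-circle arc between their weight vectors rather than along a straight line, which better preserves the magnitude and geometry of the weights when the parent models have diverged. Below is a minimal NumPy sketch of the per-tensor operation under the usual formulation; it is an illustration, not mergekit's actual implementation, and the parallel-vector fallback threshold is an assumed detail.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a` (the base model's tensor), t=1 returns `b`.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    # Normalized copies are used only to measure the angle between the tensors.
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    # Great-circle interpolation weights.
    wa = np.sin((1.0 - t) * theta) / sin_theta
    wb = np.sin(t * theta) / sin_theta
    return (wa * a_flat + wb * b_flat).reshape(a.shape)

# Example: blend two random "layers" with t = 0.38, the default in this config.
rng = np.random.default_rng(0)
merged = slerp(0.38, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
```

The interpolation factor `t` selects the blend point: `t=0` keeps the base model's tensor unchanged, while `t=1` takes the other model's tensor.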
### Models Merged

The following models were included in the merge:
* [01-ai/Yi-1.5-34B-Chat](https://huggingface.co/01-ai/Yi-1.5-34B-Chat) (base model)
* [BattlescarZa/medibuddy-llm-34B](https://huggingface.co/BattlescarZa/medibuddy-llm-34B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: 01-ai/Yi-1.5-34B-Chat
    layer_range: [0, 60]
  - model: BattlescarZa/medibuddy-llm-34B
    layer_range: [0, 60]
merge_method: slerp
base_model: 01-ai/Yi-1.5-34B-Chat
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.38
dtype: bfloat16
```
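In this configuration, `t` varies with layer depth: self-attention tensors follow the gradient `[0, 0.5, 0.3, 0.7, 1]` (base-model-heavy in early layers, medibuddy-heavy in late layers), MLP tensors follow the mirrored curve, and every other tensor uses a constant `t = 0.38`. The sketch below shows how such a gradient plausibly expands into one value per layer via piecewise-linear interpolation; `expand_gradient` is a hypothetical helper written for illustration, not part of mergekit's API.

```python
import numpy as np

def expand_gradient(values: list[float], num_layers: int) -> np.ndarray:
    """Expand a gradient like [0, 0.5, 0.3, 0.7, 1] into one t per layer.

    The anchor values are spread evenly over layer depth and
    piecewise-linearly interpolated between them.
    """
    anchors = np.linspace(0.0, 1.0, num=len(values))
    depth = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(depth, anchors, values)

self_attn_t = expand_gradient([0, 0.5, 0.3, 0.7, 1], num_layers=60)
mlp_t = expand_gradient([1, 0.5, 0.7, 0.3, 0], num_layers=60)
print(self_attn_t[0], self_attn_t[-1])  # 0.0 at the first layer, 1.0 at the last
```

With mergekit installed, a merge like this can be reproduced with `mergekit-yaml config.yml ./output-model-directory`.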
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 27.70 |
| IFEval (0-shot) | 42.35 |
| BBH (3-shot) | 42.81 |
| MATH Lvl 5 (4-shot) | 12.24 |
| GPQA (0-shot) | 14.09 |
| MuSR (0-shot) | 15.97 |
| MMLU-PRO (5-shot) | 38.77 |
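For reference, here is a minimal sketch of loading and querying the merged model with the Hugging Face `transformers` library, assuming the chat template is inherited from Yi-1.5-34B-Chat; the prompt is an arbitrary example, and this is not an official snippet from the model authors.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Yibuddy-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # weights are stored in bfloat16
    device_map="auto",    # shard the 34B model across available GPUs
)

messages = [{"role": "user", "content": "What are common symptoms of anemia?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```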