# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with Qwen/Qwen2.5-32B as the base.
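For intuition, here is a minimal PyTorch sketch of the TIES procedure (trim low-magnitude task vectors, elect a per-parameter majority sign, then average only the agreeing entries). This is an illustrative toy, not mergekit's implementation, and the `ties_merge` helper is hypothetical:

```python
import torch

def ties_merge(base: torch.Tensor, tuned: list[torch.Tensor], density: float = 1.0) -> torch.Tensor:
    """Toy TIES merge: trim, elect sign, disjoint-mean the task vectors."""
    deltas = [t - base for t in tuned]  # task vectors relative to the base model
    trimmed = []
    for d in deltas:
        # Keep only the top `density` fraction of entries by magnitude.
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    elected = torch.sign(stacked.sum(dim=0))  # majority sign per parameter
    agree = torch.sign(stacked) == elected    # entries matching the elected sign
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```

Note that this merge lists a single donor model at weight 1 and density 1 (see the configuration below), so every surviving entry trivially agrees with its own sign and the merge essentially re-applies the instruct model's full deltas to the base.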
### Models Merged

The following models were included in the merge:

- Qwen/Qwen2.5-32B-Instruct
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-32B-Instruct
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: float16
```
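Given the configuration above, the merge itself can be reproduced with mergekit's `mergekit-yaml` CLI. Below is a minimal sketch for loading the published checkpoint with Transformers; it assumes the merged repo ships Qwen2.5's tokenizer and chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sakalti/ultiima-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the `dtype` declared in the merge config
    device_map="auto",          # a 32B model in fp16 typically needs multiple GPUs
)

# Assumes the tokenizer carries a chat template (inherited from Qwen2.5-Instruct).
messages = [{"role": "user", "content": "Summarize the TIES merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```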
## Open LLM Leaderboard Evaluation Results

Detailed per-task results are available on the Open LLM Leaderboard.
| Metric | Measure | Value |
|---|---|---|
| Avg. | - | 44.32 |
| IFEval (0-shot) | strict accuracy | 68.54 |
| BBH (3-shot) | normalized accuracy | 58.11 |
| MATH Lvl 5 (4-shot) | exact match | 43.13 |
| GPQA (0-shot) | acc_norm | 17.45 |
| MuSR (0-shot) | acc_norm | 24.13 |
| MMLU-PRO (5-shot) | accuracy | 54.56 |
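The scores above were produced by the leaderboard's evaluation harness. A hedged sketch of re-running a single task locally with lm-evaluation-harness follows; the `leaderboard_ifeval` task name comes from the harness's leaderboard task group and may vary across harness versions:

```python
# Requires: pip install lm-eval
import lm_eval

# Evaluate IFEval (0-shot) on the merged model; assumes the
# `leaderboard_ifeval` task exists in the installed harness.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Sakalti/ultiima-32B,dtype=float16",
    tasks=["leaderboard_ifeval"],
    batch_size=1,
)
print(results["results"]["leaderboard_ifeval"])
```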