# Falcon3-Jessi-v0.4-7B-Slerp
This is a merge of pre-trained language models created using mergekit.
At the time of writing, this model ranks #1 on the Open LLM Leaderboard among models up to 10B parameters and #2 among models up to 14B parameters.
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method.
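For intuition, here is a minimal sketch of spherical interpolation applied to a pair of weight tensors. It illustrates the idea only and is not mergekit's actual implementation; the `slerp` helper below is hypothetical.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors (illustrative sketch)."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the tensors, treated as high-dimensional vectors.
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    if omega < eps:  # near-parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    merged = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return merged.reshape(v0.shape).to(v0.dtype)

# Example: t = 0.5 blends the two parents equally, matching the default t below.
w = slerp(0.5, torch.randn(4, 4), torch.randn(4, 4))
```

Compared with plain linear averaging, interpolating along the arc between the two tensors preserves their norm geometry, which is the usual motivation for SLERP merges.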
### Models Merged
The following models were included in the merge:

* tiiuae/Falcon3-7B-Instruct
* neopolita/jessi-v0.4-falcon3-7b-instruct
The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters. Falcon3-7B-Instruct achieved state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks, and supports four languages (English, French, Spanish, Portuguese) with a context length of up to 32K tokens.
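As a usage reference, here is a minimal sketch for loading the merged model with the Hugging Face `transformers` API; the prompt and generation settings are illustrative, not prescribed by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/Falcon3-Jessi-v0.4-7B-Slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # matches the dtype used for the merge
    device_map="auto",       # requires the `accelerate` package
)

messages = [{"role": "user", "content": "Summarize SLERP model merging in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```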
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: neopolita/jessi-v0.4-falcon3-7b-instruct
dtype: bfloat16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 28]
        model: tiiuae/Falcon3-7B-Instruct
      - layer_range: [0, 28]
        model: neopolita/jessi-v0.4-falcon3-7b-instruct
```
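Here, `t` controls how far each merged tensor sits between the two parents (0 keeps the base model's weights, 1 takes the other model's). The lists under the `self_attn` and `mlp` filters define a gradient that mergekit stretches across the layer stack; the sketch below approximates that expansion with simple linear interpolation, which is an assumption about the exact scheme rather than a guarantee.

```python
import numpy as np

# self_attn gradient from the config above, stretched across the 28 merged layers.
anchors = [0.0, 0.5, 0.3, 0.7, 1.0]
num_layers = 28
positions = np.linspace(0.0, 1.0, num_layers)
t_per_layer = np.interp(positions, np.linspace(0.0, 1.0, len(anchors)), anchors)
print(t_per_layer.round(2))  # approximate per-layer blend weight for self_attn tensors
```

The mirrored `mlp` gradient means layers that lean toward one parent for attention lean toward the other for the MLP blocks, with unfiltered tensors blended at a flat 0.5.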
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 35.23 |
| IFEval (0-shot) | 76.76 |
| BBH (3-shot) | 37.29 |
| MATH Lvl 5 (4-shot) | 34.59 |
| GPQA (0-shot) | 8.28 |
| MuSR (0-shot) | 20.49 |
| MMLU-PRO (5-shot) | 34.00 |