This is a merge of pre-trained language models created using mergekit.
GGUF quants of this model are available.
This model was merged using two methods: passthrough, which stacks layer slices from the source models to build the 11B base, and task arithmetic, which adds weighted task vectors onto a base model (sketched below).
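For intuition, task arithmetic forms each model's task vector (its weights minus the base model's) and adds a weighted sum of those vectors back onto the base. The sketch below shows just that underlying arithmetic over per-tensor state dicts; it is not mergekit's actual implementation.

```python
# Illustrative sketch of task arithmetic over per-tensor state dicts:
#   merged = base + sum_i( w_i * (model_i - base) )
# The weights 0.85 / 0.15 mirror the second config below. This is the
# underlying arithmetic only, not mergekit's implementation.
from typing import Dict, List

import torch


def task_arithmetic(
    base: Dict[str, torch.Tensor],
    models: List[Dict[str, torch.Tensor]],
    weights: List[float],
) -> Dict[str, torch.Tensor]:
    merged = {}
    for name, base_tensor in base.items():
        # Weighted sum of task vectors for this tensor.
        delta = sum(w * (m[name] - base_tensor) for m, w in zip(models, weights))
        merged[name] = base_tensor + delta
    return merged
```

Note that in the second config below, Big-Lemon-Cookie-11B-BF16 is both the base model and the 0.85-weighted entry, so its own task vector is zero; as far as the arithmetic goes, the merge reduces to shifting the base 0.15 of the way toward Fimbulvetr-11B-v2.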
The following models were included in the merge:

* SanjiWatsuki/Kunoichi-7B
* SanjiWatsuki/Silicon-Maid-7B
* KatyTheCutie/LemonadeRP-4.5.3
* Sao10K/Fimbulvetr-11B-v2
The following YAML configurations were used to produce this model:
```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 24]
  - sources:
      - model: SanjiWatsuki/Silicon-Maid-7B
        layer_range: [8, 24]
  - sources:
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
name: Big-Lemon-Cookie-11B-BF16

---

models:
  - model: Big-Lemon-Cookie-11B-BF16
    parameters:
      weight: 0.85
  - model: Sao10K/Fimbulvetr-11B-v2
    parameters:
      weight: 0.15
merge_method: task_arithmetic
base_model: Big-Lemon-Cookie-11B-BF16
dtype: bfloat16
name: Chewy-Lemon-Cookie-11B
```
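For reference, below is a minimal sketch of running the first document above through mergekit's Python API; the `mergekit-yaml <config> <out_dir>` CLI is the equivalent command-line entry point. This is an assumed workflow, not the author's actual invocation, and the filenames and output paths are placeholders.

```python
# Assumed workflow (not the author's script): run the first YAML document
# above with mergekit's Python API, producing the intermediate merge that
# the second document then consumes as its base_model.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder filename holding only the first document above (the `name:`
# key is assumed to be dropped when running documents one at a time).
with open("big-lemon-cookie.yml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    # Output directory; the second document references this merge as
    # Big-Lemon-Cookie-11B-BF16 before producing Chewy-Lemon-Cookie-11B.
    out_path="./Big-Lemon-Cookie-11B-BF16",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```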
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 21.91 |
| IFEval (0-Shot) | 48.75 |
| BBH (3-Shot) | 33.01 |
| MATH Lvl 5 (4-Shot) | 4.61 |
| GPQA (0-Shot) | 3.91 |
| MuSR (0-Shot) | 15.95 |
| MMLU-PRO (5-Shot) | 25.19 |
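For completeness, a minimal generation sketch with transformers; the model path is a placeholder (the local merge output or a Hub repo id) and the prompt is arbitrary:

```python
# Hypothetical usage sketch; model_id is a placeholder, not a confirmed repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./Chewy-Lemon-Cookie-11B"  # local merge output or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

prompt = "Write a short scene set at a lemonade stand."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```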