---
base_model:
- ibm/merlinite-7b
library_name: transformers
tags:
- mergekit
- merge
- GGUF
license: apache-2.0
---
# Excalibur-7b GGUF

*Image generated with Envoid's Model9 SDXL model.*

The FP16 weights can be found here.
Magic-Dolphin-7b was an unexpected surprise, and I was profoundly satisfied with it as a first attempt. For this follow-up I wanted to target the MMLU benchmark specifically. The challenge this time was placing more weight on Merlinite-7b, an unknown quantity that has not been in the spotlight despite its novel LAB tuning method.
Excalibur-7b builds on past success and is the culmination of several learnings:
- Measuring KL-divergences for new quantization types brought a deeper understanding of benchmarking and assessing model performance (a rough sketch of that measurement follows this list)
- Using MMLU as a baseline significantly sped up the testing process, narrowing more than 10 candidate linear merges down to one: merliniteX-blockB1
- Reaching the limitations of linear merging necessitated a pivot to reviewing the viability of SLERP, DARE-TIES, and Passthrough methods
- Thus a competing candidate merge pool was tested across different merge algorithms. Once more the list was narrowed from 10 candidates to one: merliniteX-blockF2
- merliniteX-blockF2 (a SLERP of Magic-Dolphin-7B and jaskier-7b-dpo in unorthodox proportions) was originally planned for release earlier this week
- Instead, -blockB1 and -blockF2 were merged, and the results were placed head to head in a final round of tests. Ultimately a more conventional execution of SLERP produced the best results for the final step.
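The KL-divergence check mentioned above is conceptually simple: run the FP16 reference and a quantized variant over the same evaluation tokens and compare their next-token distributions position by position. A minimal sketch of that comparison, assuming the per-position logits have already been collected from both models (the array names and shapes are placeholders, not part of any particular tool):

```python
# Hypothetical sketch of a quantization KL-divergence check.
# ref_logits / quant_logits are assumed to have shape [positions, vocab_size].
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable log-softmax over the vocabulary axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

def mean_kl_divergence(ref_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    """Mean per-token KL(ref || quant); lower means the quant tracks FP16 more closely."""
    log_p = log_softmax(ref_logits)    # FP16 reference distribution
    log_q = log_softmax(quant_logits)  # quantized model distribution
    kl_per_token = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)
    return float(kl_per_token.mean())
```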
## Sample Question

## Bonus Question - Vision Capabilities
Vision functionality requires the additional mistral-7b-mmproj-v1.5-Q4_1.gguf file.
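A hedged usage sketch with llama-cpp-python, assuming its LLaVA-style chat handler accepts this mmproj file as the CLIP projector (the model file name and image path are placeholders):

```python
# Illustrative only: pairing the GGUF weights with the mmproj file for vision prompts.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mistral-7b-mmproj-v1.5-Q4_1.gguf")
llm = Llama(
    model_path="Excalibur-7b-Q4_K_M.gguf",  # placeholder quant file name
    chat_handler=chat_handler,
    n_ctx=4096,
)
response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])
```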
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
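For reference, SLERP interpolates along the arc between two weight tensors rather than along a straight line, which preserves their overall magnitude better than plain averaging. A minimal sketch of the idea (not mergekit's actual implementation), applied to a pair of flattened tensors:

```python
# Conceptual SLERP between two weight tensors; falls back to linear
# interpolation when the tensors are nearly parallel.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between tensors a and b at mixing ratio t."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    omega = np.arccos(np.clip(np.dot(a_norm, b_norm), -1.0, 1.0))
    if omega < eps:  # nearly parallel: plain linear interpolation is fine
        return (1.0 - t) * a + t * b
    sin_omega = np.sin(omega)
    coeff_a = np.sin((1.0 - t) * omega) / sin_omega
    coeff_b = np.sin(t * omega) / sin_omega
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape)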
### Models Merged
The following models were included in the merge:
- models/merliniteX-blockB1
- models/merliniteX-blockF2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: models/merliniteX-blockF2
        layer_range: [0, 32]
      - model: models/merliniteX-blockB1
        layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
#   - model: psmathur/orca_mini_v3_13b
#   - model: garage-bAInd/Platypus2-13B
merge_method: slerp
base_model: models/merliniteX-blockF2
parameters:
  t:
    - filter: self_attn
      value: [1, 0.7, 0.3, 0.5, 0]
    - filter: mlp
      value: [0, 0.3, 0.7, 0.5, 1]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
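The five-element value lists act as gradient anchors for the interpolation factor t, so the self-attention and MLP tensors swing in opposite directions across layer depth while every other tensor uses the 0.5 fallback. A rough illustration of how such anchors could be expanded into per-layer t values over 32 layers, assuming simple linear interpolation between anchor points (this is a conceptual sketch, not mergekit's internal code):

```python
# Expand a five-anchor gradient like [1, 0.7, 0.3, 0.5, 0] into one t per layer.
import numpy as np

def expand_gradient(anchors, num_layers=32):
    """Linearly interpolate gradient anchors across normalized layer depth."""
    positions = np.linspace(0.0, 1.0, num=len(anchors))
    depths = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(depths, positions, anchors)

self_attn_t = expand_gradient([1, 0.7, 0.3, 0.5, 0])
mlp_t = expand_gradient([0, 0.3, 0.7, 0.5, 1])
print(self_attn_t.round(2))
print(mlp_t.round(2))
```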