here be dragons
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SCE merge method, with DreadPoor/Aspire-8B-model_stock as the base.
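As a rough intuition for what SCE does here (this is a simplified per-tensor sketch, not mergekit's actual implementation, which selects salient parameters by variance across models and resolves sign conflicts): each fine-tuned model's differences against the base are sparsified so only the largest fraction survives, and the surviving deltas are fused back onto the base weights. The function and variable names below are illustrative only.

```python
import torch

def sce_sketch(base: torch.Tensor, finetuned: list[torch.Tensor], topk: float = 0.35) -> torch.Tensor:
    """Simplified illustration of SCE-style merging for one weight tensor.

    Keeps only the top-`topk` fraction of each model's differences against
    the base (by magnitude here, for brevity) and averages the surviving
    deltas back onto the base weights.
    """
    deltas = [ft - base for ft in finetuned]
    kept = []
    for d in deltas:
        k = max(1, int(topk * d.numel()))
        # Threshold at the k-th largest magnitude; zero out everything below it.
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        kept.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    # Fuse: average the sparsified deltas elementwise and add to the base.
    return base + torch.stack(kept).mean(dim=0)

# Toy usage with random tensors standing in for one weight matrix:
base = torch.randn(4, 4)
merged = sce_sketch(base, [base + 0.1 * torch.randn(4, 4) for _ in range(3)])
```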
The following models were included in the merge:

- refuelai/Llama-3-Refueled
- vicgalle/Configurable-Llama-3.1-8B-Instruct
- johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.9
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: refuelai/Llama-3-Refueled # high BBH score
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct # high IFEval score
  - model: johnsutor/Llama-3-8B-Instruct_dare_ties-density-0.9 # high MuSR score
merge_method: sce
base_model: DreadPoor/Aspire-8B-model_stock # reference baseline; the output is a modified version of it
parameters:
  select_topk: 0.35 # for each model, keep the top 35% of differences relative to the base; too-aggressive selection can be detrimental
dtype: bfloat16
int8_mask: true
```
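To reproduce the merge, saving the configuration above as, say, config.yml and running mergekit's CLI (`mergekit-yaml config.yml ./merged`) should produce the model; the file and output paths here are placeholders. The merged checkpoint (or the published weights) then loads like any other Llama-3 model, for example with transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"  # placeholder: local merge output or the published repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```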