---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# final_merge_evo

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using ./storage_evo/input_models/Mistral-7B-v0.1_8133861 as a base.

### Models Merged

The following models were included in the merge:
* ./storage_evo/input_models/WizardMath-7B-V1.1_2027605156
* ./storage_evo/input_models/shisa-gamma-7b-v1_4025154171

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ./storage_evo/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: ./storage_evo/input_models/shisa-gamma-7b-v1_4025154171
    parameters:
      density: 1.0
      weight: 0.011205095488974873
  - layer_range: [0, 32]
    model: ./storage_evo/input_models/WizardMath-7B-V1.1_2027605156
    parameters:
      density: 0.5651072788500664
      weight: 0.7263645106101857
  - layer_range: [0, 32]
    model: ./storage_evo/input_models/Mistral-7B-v0.1_8133861
```
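
## Usage

A merge like this is typically produced by pointing mergekit's `mergekit-yaml` CLI at the configuration above (e.g. `mergekit-yaml config.yml ./final_merge_evo`). Once the merged weights are on disk, they load like any other Mistral-style checkpoint. The sketch below is a minimal example, assuming the merge was written to a hypothetical local directory `./final_merge_evo`:

```python
# Minimal sketch: load the merged checkpoint with transformers.
# "./final_merge_evo" is an assumed local output path; substitute the
# directory (or Hub repo id) where the merged weights actually live.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./final_merge_evo"  # assumption: local merge output directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

prompt = "Solve step by step: what is 12 * 17?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```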