---
base_model:
- nbeerbower/mistral-nemo-gutenberg3-12B
- Spestly/Ava-1.5-12B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
Quant: [Triangle104/Minerva-10b-Q4_K_M-GGUF](https://huggingface.co/Triangle104/Minerva-10b-Q4_K_M-GGUF)

Feedback is welcome; leave a like if you find the model useful. Credit to nbeerbower and Spestly for the original models.
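If you prefer to fetch the quant programmatically, here is a minimal sketch using `huggingface_hub`. The exact GGUF filename inside the repo is an assumption, so check the repo's file list before relying on it:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M GGUF file from the quant repo linked above.
gguf_path = hf_hub_download(
    repo_id="Triangle104/Minerva-10b-Q4_K_M-GGUF",
    # Hypothetical filename -- verify against the actual files in the repo.
    filename="minerva-10b-q4_k_m.gguf",
)
print(gguf_path)
```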
### Merge Method
This model was merged using the SLERP merge method.
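For intuition only, here is a minimal sketch of spherical linear interpolation between two weight tensors. It is not mergekit's implementation, which additionally applies the per-layer `t` schedule shown in the configuration below:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two same-shaped weight tensors.

    t=0 returns a (the base model's weights), t=1 returns b.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors on the unit hypersphere
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    coef_a = torch.sin((1 - t) * omega) / sin_omega
    coef_b = torch.sin(t * omega) / sin_omega
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)
```

Unlike plain linear averaging, SLERP follows the arc between the two weight vectors, preserving their magnitude-direction structure rather than shrinking it toward the midpoint.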
### Models Merged
The following models were included in the merge:

* [nbeerbower/mistral-nemo-gutenberg3-12B](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg3-12B)
* [Spestly/Ava-1.5-12B](https://huggingface.co/Spestly/Ava-1.5-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: Spestly/Ava-1.5-12B
    layer_range: [0, 32]
  - model: nbeerbower/mistral-nemo-gutenberg3-12B
    layer_range: [0, 32]
merge_method: slerp
base_model: Spestly/Ava-1.5-12B
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
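In this config, `t=0` keeps the base model (Spestly/Ava-1.5-12B) and `t=1` takes nbeerbower/mistral-nemo-gutenberg3-12B; the five-element `value` lists are spread across the layer range, so self-attention weights lean toward Ava in early layers and gutenberg3 in later layers, the MLP schedule is mirrored, and all remaining tensors use a flat 0.5 blend.

A standard transformers loading sketch follows; the repo id `Triangle104/Minerva-10b` is inferred from the quant link above and is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the quant link above -- adjust if needed.
model_id = "Triangle104/Minerva-10b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "Write a short scene in the style of a 19th-century novel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```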