---
base_model:
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
- cognitivecomputations/samantha-1.1-westlake-7b-laser
library_name: transformers
tags:
- mergekit
- merge
- mistral
- nous
- westlake
- samantha
license: cc
---
GGUF quants by [@mradermacher](https://huggingface.co/mradermacher): https://huggingface.co/mradermacher/Cypher-7B-GGUF
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
        layer_range: [0, 32]
      - model: "cognitivecomputations/samantha-1.1-westlake-7b-laser"
        layer_range: [0, 32]
merge_method: slerp
base_model: "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
parameters:
  t:
    - filter: lm_head
      value: [0.55]
    - filter: embed_tokens
      value: [0.7]
    - filter: self_attn
      value: [0.65, 0.35]
    - filter: mlp
      value: [0.35, 0.65]
    - filter: layernorm
      value: [0.4, 0.6]
    - filter: modelnorm
      value: [0.6]
    - value: 0.5
dtype: bfloat16
```
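Each `filter` entry above sets the interpolation factor `t` for the matching parameter group; when a filter lists two values, mergekit spreads them as a gradient across the layer stack, and the bare `value: 0.5` at the end is the fallback for tensors no filter matches. The sketch below shows roughly how such a value list could expand into a per-layer schedule over 32 layers (`layer_t` is a hypothetical helper for illustration, not mergekit's internal code):

```python
# Illustrative only: how a list of anchor values can be expanded into a
# per-layer interpolation schedule. Mergekit computes this internally.
import numpy as np

def layer_t(values: list[float], num_layers: int) -> np.ndarray:
    """Linearly interpolate anchor values across the layer index range."""
    anchors = np.linspace(0, num_layers - 1, num=len(values))
    return np.interp(np.arange(num_layers), anchors, values)

print(layer_t([0.65, 0.35], 32))  # self_attn: ramps from 0.65 down to 0.35
print(layer_t([0.55], 32))        # lm_head: a single value stays constant
```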
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with NousResearch/Nous-Hermes-2-Mistral-7B-DPO as the base model.
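SLERP interpolates each pair of parent weight tensors along the arc between them on the unit sphere rather than along a straight line, which preserves the geometry of the weights better than plain averaging. A minimal sketch of the per-tensor formula on flattened `torch` tensors (not mergekit's actual implementation, which also handles degenerate cases and the per-filter schedules above):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (sketch only)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.acos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    coeff_a = torch.sin((1 - t) * omega) / so
    coeff_b = torch.sin(t * omega) / so
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape).to(a.dtype)
```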
### Models Merged
The following models were included in the merge:
* [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
* [cognitivecomputations/samantha-1.1-westlake-7b-laser](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b-laser)
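### Usage
A minimal sketch of loading the merged model with 🤗 Transformers; the repository id below is a placeholder and should be replaced with this repo's actual name:

```python
# Minimal loading sketch. "your-username/Cypher-7B" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Cypher-7B"  # placeholder: substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config above
    device_map="auto",           # requires the accelerate package
)

prompt = "Write a haiku about model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```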