---
license: apache-2.0
tags:
- merge
- mergekit
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- CultriX/NeuralTrix-7B-dpo
- CorticalStack/neurotic-crown-clown-7b-ties
---
<img src="shadow_clown.png" alt="Shadow clown logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# shadow-clown-7B-dare
shadow-clown-7B-dare is a DARE-TIES merge of the following models, built with [mergekit](https://github.com/cg123/mergekit):
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
* [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties)
See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method.
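The core DARE operation from that paper keeps only a random `density` fraction of each fine-tuned delta (task vector) and rescales the survivors by `1/density`, so the sparsified delta matches the original in expectation. A minimal NumPy sketch of the idea; the function name and shapes are illustrative, not mergekit's API:

```python
import numpy as np

def dare_sparsify(delta: np.ndarray, density: float, rng: np.random.Generator) -> np.ndarray:
    """Drop-and-rescale: keep each entry with probability `density`,
    scale survivors by 1/density so the expected value equals `delta`."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

rng = np.random.default_rng(0)
delta = rng.normal(size=100_000)          # stand-in for a fine-tuned weight delta
sparse = dare_sparsify(delta, 0.52, rng)  # density matching this merge's config
```

About 52% of entries survive, and the mean of the sparsified delta stays close to the original, which is why fairly aggressive dropping is a "free lunch".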
## 🧩 Configuration
```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
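Conceptually, `dare_ties` sparsifies each model's task vector, elects a per-parameter sign from the weighted deltas, and sums only the contributions that agree with the elected sign. A hedged toy sketch of that flow with small random tensors standing in for real 7B weights; the function and variable names are illustrative, not mergekit internals:

```python
import numpy as np

def dare_ties_merge(base, tuned, densities, weights, seed=0):
    """Toy dare_ties: DARE-sparsify each task vector, elect a sign per
    parameter from the weighted deltas, keep only agreeing contributions."""
    rng = np.random.default_rng(seed)
    sparse = []
    for model, density in zip(tuned, densities):
        delta = model - base
        mask = rng.random(delta.shape) < density
        sparse.append(np.where(mask, delta / density, 0.0))
    # TIES-style sign election on the weighted, sparsified deltas
    elected = np.sign(sum(w * d for w, d in zip(weights, sparse)))
    merged_delta = sum(
        w * np.where(np.sign(d) == elected, d, 0.0)
        for w, d in zip(weights, sparse)
    )
    return base + merged_delta

# Shapes only; densities and weights mirror the config above.
base = np.zeros(10_000)
tuned = [base + np.random.default_rng(i).normal(size=base.shape) for i in (1, 2, 3)]
merged = dare_ties_merge(base, tuned, densities=[0.52] * 3, weights=[0.4, 0.2, 0.3])
```

In the real merge, mergekit applies this per weight tensor across all three fine-tunes, with `yam-peleg/Experiment26-7B` as the base and `int8_mask`/`bfloat16` controlling masking and output precision.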