---
license: apache-2.0
tags:
- merge
- mergekit
- bardsai/jaskier-7b-dpo-v5.6
- mlabonne/NeuralDaredevil-7B
- Gille/StrangeMerges_21-7B-slerp
- CultriX/NeuralTrix-7B-dpo
---
<img src="pikus-pikantny.png" alt="Pikus pikantny logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# pikus-pikantny-7B-dare
pikus-pikantny-7B-dare is a DARE-TIES merge of the following models, created with [mergekit](https://github.com/cg123/mergekit):
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method.
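For intuition, DARE keeps a random fraction (the `density`) of each fine-tuned model's delta from the base model and rescales the survivors so the expected delta is preserved. A minimal PyTorch sketch of that drop-and-rescale step (illustrative tensors and function names, not mergekit's internals):

```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: keep each entry of the delta (finetuned - base)
    with probability `density`, then rescale survivors by 1/density so
    the expected delta is unchanged."""
    delta = finetuned - base
    mask = torch.rand_like(delta) < density  # True with probability `density`
    return delta * mask / density

# Illustrative merge of three deltas onto a base, mirroring the
# density/weight parameters in the configuration below.
base = torch.randn(4, 4)
finetunes = [torch.randn(4, 4) for _ in range(3)]
weights = [0.3, 0.4, 0.3]
merged = base + sum(w * dare_delta(base, ft, density=0.53)
                    for w, ft in zip(weights, finetunes))
```

The `dare_ties` method used here additionally applies TIES-style sign election across the deltas before summing; this sketch shows only the DARE step.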
## 🧩 Configuration
```yaml
models:
  - model: bardsai/jaskier-7b-dpo-v5.6
    # no parameters needed for the base model
  - model: mlabonne/NeuralDaredevil-7B
    parameters:
      density: 0.53
      weight: 0.3
  - model: Gille/StrangeMerges_21-7B-slerp
    parameters:
      density: 0.53
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
  int8_mask: true
dtype: bfloat16
```
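
## 💻 Usage

A minimal sketch of loading the merged model with 🤗 Transformers; the repo id below is assumed from this card's name and may need adjusting. The merge itself can be reproduced by saving the YAML above to a file and running mergekit's `mergekit-yaml` command on it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this model card; adjust if the weights live elsewhere.
model_id = "CorticalStack/pikus-pikantny-7B-dare"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # the merge was produced in bfloat16
    device_map="auto",
)

prompt = "Explain what a DARE-TIES model merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```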