---
base_model:
- Kaoeiri/Qwenwify-32B-v3
- allura-org/Qwen2.5-32b-RP-Ink
- Dans-DiscountModels/Qwen2.5-32B-ChatML
- Qwen/QwQ-32B-Preview
- Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B
- OpenBuddy/openbuddy-qwq-32b-v24.2-200k
- Sao10K/32B-Qwen2.5-Kunou-v1
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: cc-by-nc-nd-4.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview) as the base model.
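For intuition, DARE randomly drops a fraction of each model's task vector (its delta from the base) and rescales the survivors so the expected delta is unchanged; TIES then resolves sign conflicts between models before summing. The sketch below is a simplified illustration of that core idea, not mergekit's actual implementation; the function names, the exact sign-election rule, and the `lam` scaling parameter (mirroring `lambda` in the config) are assumptions for illustration.

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: keep each entry with probability `density`, zero the rest,
    and rescale survivors by 1/density so the expected delta is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base: torch.Tensor,
              finetuned: list[torch.Tensor],
              weights: list[float],
              densities: list[float],
              lam: float = 1.0) -> torch.Tensor:
    """Simplified DARE-TIES merge of one parameter tensor."""
    # Sparsify each weighted task vector with DARE.
    deltas = [dare(ft - base, d) * w
              for ft, w, d in zip(finetuned, weights, densities)]
    stacked = torch.stack(deltas)
    # TIES-style sign election: keep only entries that agree with the
    # majority sign of the summed deltas, then sum the survivors.
    majority_sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == majority_sign
    merged = (stacked * agree).sum(dim=0)
    # `lam` globally scales the merged delta, like `lambda` in the YAML below.
    return base + lam * merged
```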
### Models Merged
The following models were included in the merge:
* [Kaoeiri/Qwenwify-32B-v3](https://huggingface.co/Kaoeiri/Qwenwify-32B-v3)
* [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink)
* [Dans-DiscountModels/Qwen2.5-32B-ChatML](https://huggingface.co/Dans-DiscountModels/Qwen2.5-32B-ChatML)
* [Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B)
* [OpenBuddy/openbuddy-qwq-32b-v24.2-200k](https://huggingface.co/OpenBuddy/openbuddy-qwq-32b-v24.2-200k)
* [Sao10K/32B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Kaoeiri/Qwenwify-32B-v3 # backbone model, used for fusion
    parameters:
      weight: 1.0
      density: 0.92
  - model: Sao10K/32B-Qwen2.5-Kunou-v1 # RP and synthetic storywriting
    parameters:
      weight: 0.30
      density: 0.75
  - model: Dans-DiscountModels/Qwen2.5-32B-ChatML # logic and chatting focus
    parameters:
      weight: 0.15
      density: 0.85
  - model: OpenBuddy/openbuddy-qwq-32b-v24.2-200k # Chinese-heavy datasets, raw, diverse
    parameters:
      weight: 0.25
      density: 0.88
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B # Japanese-focused, base model
    parameters:
      weight: 0.20
      density: 0.82
  - model: allura-org/Qwen2.5-32b-RP-Ink # RP-focused, unique character traits
    parameters:
      weight: 0.28
      density: 0.78
merge_method: dare_ties
base_model: Qwen/QwQ-32B-Preview
parameters:
  density: 0.90
  epsilon: 0.05
  lambda: 1.35
random_seed: 42
dtype: bfloat16
tokenizer_source: union
```
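To reproduce the merge, save the YAML above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`. The result loads like any `transformers` causal LM; the sketch below assumes the merged weights live in a local `./merged-model` directory (replace the path, or use this repository's id on the Hub):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path where mergekit wrote the merged checkpoint (placeholder; adjust as needed).
merge_dir = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(merge_dir)
model = AutoModelForCausalLM.from_pretrained(
    merge_dir,
    torch_dtype=torch.bfloat16,  # matches `dtype: bfloat16` in the config
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```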