|
---
license: cc-by-nc-nd-4.0
tags:
  - not-for-all-audiences
---
|
|
|
# merge |
|
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
|
## Merge Details |
|
### Merge Method |
|
|
|
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) as the base.
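
In DARE, each fine-tune contributes a task vector (its weights minus the base); a random fraction (1 − density) of each task vector is dropped and the survivors are rescaled, and TIES then resolves sign disagreements between models before the combined delta is added back to the base, scaled by lambda. Below is a minimal single-tensor sketch of that combined idea in PyTorch; the function name, the sign-election shortcut, and the final averaging step are illustrative, not mergekit's exact implementation:

```python
import torch

def dare_ties_merge(base, finetuned, weights, density=0.7, lam=1.0, seed=0):
    """Merge one tensor from several fine-tunes into the base (sketch only)."""
    gen = torch.Generator().manual_seed(seed)
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                                  # task vector
        keep = torch.rand(delta.shape, generator=gen) < density
        deltas.append(w * delta * keep / density)          # DARE: drop, then rescale
    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))                  # TIES: elect a majority sign
    agree = torch.sign(stacked) == sign
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + lam * merged                             # lambda-scaled delta onto base
```

With the settings in the configuration below, `anthracite-org/magnum-v4-22b` (weight 1.0, density 0.88) dominates the merged delta, while the lower-weight models contribute sparse, rescaled corrections; `lambda: 1.22` then amplifies the combined delta slightly.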
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge:

* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [TheDrummer/Cydonia-22B-v1.3](https://huggingface.co/TheDrummer/Cydonia-22B-v1.3)
* [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) + [rAIfle/Acolyte-LORA](https://huggingface.co/rAIfle/Acolyte-LORA) (a LoRA applied to the base model; see the sketch after this list)
* [Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small)
* [InferenceIllusionist/SorcererLM-22B](https://huggingface.co/InferenceIllusionist/SorcererLM-22B)
* [allura-org/MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
* [TheDrummer/Cydonia-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1)
* [crestf411/MS-sunfall-v0.7.0](https://huggingface.co/crestf411/MS-sunfall-v0.7.0)
* [spow12/ChatWaifu_v2.0_22B](https://huggingface.co/spow12/ChatWaifu_v2.0_22B)
* [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) + [Kaoeiri/Moingooistrial-22B-V1-Lora](https://huggingface.co/Kaoeiri/Moingooistrial-22B-V1-Lora) (likewise a LoRA applied to the base)
* [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1)
* [Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B)
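
Two of the entries above are written as `base + LoRA`: mergekit applies the LoRA adapter to the base model on the fly and merges the result as if it were a full fine-tune. A rough standalone equivalent using the `peft` library (the merged output here only approximates what mergekit does internally):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the shared base model, then graft one of the listed LoRA adapters onto it.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Small-Instruct-2409", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "rAIfle/Acolyte-LORA")
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
```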
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0    # Primary model for human-like writing
      density: 0.88  # Solid foundation for clear, balanced text generation
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.26   # Slightly reduced weight for creativity
      density: 0.7   # Matches revised influence for subtle creativity
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.16   # Reduced weight to dial back creativity overlap
      density: 0.68  # Harmonized with the roleplay model reductions
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.18   # Further reduced to minimize intrusive elements
      density: 0.68  # Balanced density for roleplay accuracy
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.28   # Reduced for less dominance of storytelling tropes
      density: 0.77  # Adjusted density for smoother integration
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.3    # Retained for its balanced creativity
      density: 0.72  # Supports descriptive fluency and accuracy
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27   # Kept intact to retain anime-style RP nuance
      density: 0.7   # Unmodified for balance with other models
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.2    # Slight reduction to balance Japanese context influence
      density: 0.58  # Fine-tuned to support overall coherence
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.25   # Reduced weight for a subtler dramatic tone
      density: 0.74  # Balanced density for smoother blending
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.24   # Slight reduction for subtler varied content inputs
      density: 0.7   # Aligned density for balanced integration
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.23   # Reduced for a more cohesive stylistic approach
      density: 0.74  # Matches weight reduction for smoother outputs
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.26   # Slightly dialed back for monster and mythical content
      density: 0.72  # Balanced for seamless integration
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.12   # Light touch to keep its style from overheating the mix
      density: 0.65  # Low density to avoid conflict with roleplay-heavy models

merge_method: dare_ties  # Well suited to blending many diverse fine-tunes
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85  # Retained for logical and creative balance
  epsilon: 0.09  # Small step size for smooth blending
  lambda: 1.22   # Slightly adjusted scaling for refined sharpness
dtype: bfloat16
``` |
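
To reproduce the merge, save the configuration as e.g. `config.yaml` and run mergekit's CLI: `mergekit-yaml config.yaml ./merged`. The resulting directory loads like any other Transformers checkpoint; a minimal generation example follows (the `./merged` path and the prompt are illustrative, and the model uses Mistral's `[INST]` chat format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_dir = "./merged"  # wherever mergekit wrote the merge (illustrative path)
tok = AutoTokenizer.from_pretrained(merged_dir)
model = AutoModelForCausalLM.from_pretrained(
    merged_dir, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "[INST] Write a short scene set in a rain-soaked harbor town. [/INST]"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=True))
```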