# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base model.
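As a rough intuition, DARE TIES combines two ideas: DARE randomly drops entries from each fine-tune's delta (its difference from the base model) and rescales the survivors, while TIES resolves sign conflicts between models by majority vote before averaging. The sketch below is a toy illustration of both steps on plain Python lists, not mergekit's actual implementation:

```python
import random

def dare_sparsify(delta, density, seed=0):
    """DARE step: keep each delta entry with probability `density` and
    rescale survivors by 1/density, preserving the expected magnitude."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

def ties_sign_merge(deltas):
    """TIES step: per parameter, elect the majority sign (by summed value)
    and average only the entries that agree with that sign."""
    merged = []
    for entries in zip(*deltas):
        sign = 1.0 if sum(entries) >= 0 else -1.0
        agree = [e for e in entries if e * sign > 0]
        merged.append(sum(agree) / len(agree) if agree else 0.0)
    return merged

# Two models' deltas for the same four parameters.
delta_a = dare_sparsify([0.5, -0.2, 0.1, 0.3], density=0.75)
delta_b = dare_sparsify([0.3, 0.4, -0.1, 0.2], density=0.75, seed=1)
merged_delta = ties_sign_merge([delta_a, delta_b])
```

The per-model `density` values in the configuration below control the DARE keep probability for each model's delta.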
### Models Merged
The following models were included in the merge:
- hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
- Gryphe/Pantheon-RP-1.6.2-22b-Small
- TroyDoesAI/BlackSheep-Mistral-22B
- TheDrummer/Cydonia-22B-v1.3
- byroneverson/Mistral-Small-Instruct-2409-abliterated + Alfitaria/mistral-small-fujin-qlora
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- concedo/Beepo-22B + rAIfle/Acolyte-LORA
- TheDrummer/Cydonia-22B-v1.1
- Darkknight535/MS-Moonlight-22B-v3
- concedo/Beepo-22B + ToastyPigeon/mistral-small-springdragon-qlora
- InferenceIllusionist/SorcererLM-22B
- DigitalSouls/BlackSheep-DigitalSoul-22B
- TheDrummer/Cydonia-22B-v1.2
- crestf411/MS-sunfall-v0.7.0
- anthracite-org/magnum-v4-22b
- concedo/Beepo-22B + Kaoeiri/Moingooistrial-22B-V1-Lora
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Primary RP and Interaction Models
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.24
      density: 0.73
  - model: TroyDoesAI/BlackSheep-Mistral-22B
    parameters:
      weight: 0.25
      density: 0.72
  - model: concedo/Beepo-22B+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.30
      density: 0.74
  # Role-Playing Models (Low priority but important for RP)
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.13
      density: 0.64
  # Versatile, High-Performance Models
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0
      density: 0.86
  - model: DigitalSouls/BlackSheep-DigitalSoul-22B
    parameters:
      weight: 0.19
      density: 0.68
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.20
      density: 0.71
  - model: concedo/Beepo-22B+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.22
      density: 0.69
  # Supporting Models (Fine-tuned for specific tasks)
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.22
      density: 0.68
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.13
      density: 0.66
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.15
      density: 0.65
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.21
      density: 0.7
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated+Alfitaria/mistral-small-fujin-qlora
    parameters:
      weight: 0.16
      density: 0.68
  - model: concedo/Beepo-22B+ToastyPigeon/mistral-small-springdragon-qlora
    parameters:
      weight: 0.17
      density: 0.71
  - model: hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
    parameters:
      weight: 0.2
      density: 0.68
  # Additional RP-focused Models (Lower Priority)
  - model: Gryphe/Pantheon-RP-1.6.2-22b-Small
    parameters:
      weight: 0.14
      density: 0.66
  # Secondary RP-focused Models
  - model: Darkknight535/MS-Moonlight-22B-v3
    parameters:
      weight: 0.12
      density: 0.63
merge_method: dare_ties
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.84
  epsilon: 0.07
  lambda: 1.23
dtype: bfloat16
```
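Schematically, each model's sparsified task vector is scaled by its per-model `weight`, the weighted vectors are combined, the result is scaled by the global `lambda` (1.23 here), and the total is added to the base weights. The following is a toy illustration of that final combination step with made-up numbers, not mergekit's internals:

```python
# Toy parameters: base weights and two already-sparsified task vectors,
# each paired with its per-model `weight` from a config like the one above.
base = [1.0, 2.0, 3.0]
deltas = {
    "model_a": ([0.4, 0.0, -0.2], 0.25),  # (sparsified delta, weight)
    "model_b": ([0.0, 0.6, 0.2], 1.0),
}
lam = 1.23  # global lambda scaling the merged task vector

# merged = base + lambda * sum(weight_i * delta_i)
merged = list(base)
for delta, weight in deltas.values():
    for i, d in enumerate(delta):
        merged[i] += lam * weight * d
```

To reproduce the merge itself, the YAML above can be saved to a file and passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-dir`.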