# Emerald Wyvern

## Overview

Emerald-Wyvern-12B was created with a custom merge method, FlowSync. It merges Vortex5/Luminous-Shadow-12B, Vortex5/Velvet-Orchid-12B, and Vortex5/Crimson-Twilight-12B, using Vortex5/Harmony-Bird-12B as the base model.

## Merge Configuration
```yaml
models:
  - model: Vortex5/Luminous-Shadow-12B
    parameters:
      weight:
        - filter: self_attn
          value: [0.20, 0.35, 0.55, 0.75, 1.00, 0.90, 0.25, 0.25]
        - value: 0.33
  - model: Vortex5/Velvet-Orchid-12B
    parameters:
      weight:
        - filter: mlp
          value: [0.20, 0.30, 0.20, 0.7, 0.65, 0.66, 0.0, 0.30]
        - value: 0.33
  - model: Vortex5/Crimson-Twilight-12B
    parameters:
      weight:
        - filter: self_attn
          value: [0.25, 0.35, 0.45, 0.55, 0.60, 0.65, 0.88, 0.88]
        - filter: mlp
          value: [0.20, 0.30, 0.45, 0.65, 0.80, 0.90, 0.95, 0.85]
        - value: 0.33
merge_method: flowsync
base_model: Vortex5/Harmony-Bird-12B
dtype: bfloat16
parameters:
  strength: 1.0
  balance: 0.5
tokenizer:
  source: Vortex5/Harmony-Bird-12B
```
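Since `flowsync` is a custom method, it is not available in stock mergekit; assuming a mergekit installation with the method registered, the configuration could be run roughly like this (file names are illustrative):

```shell
# Save the configuration above as emerald-wyvern.yaml, then run mergekit.
# NOTE: flowsync must be registered in your mergekit installation;
# the stock release does not include it.
mergekit-yaml emerald-wyvern.yaml ./Emerald-Wyvern-12B \
    --cuda \
    --copy-tokenizer
```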
## Custom Merge Method

### About FlowSync
Concept: A coherence-weighted tensor merge algorithm that aligns donor model deltas relative to a shared base, computes pairwise cosine similarities to measure inter-model agreement, and aggregates them along the highest-consensus direction in parameter space. It reinforces semantically consistent weight shifts while suppressing incoherent or noisy deviations using adaptive temperature scaling, semantic gating (Nyström eigenvector extraction), and MAD-based normalization.
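The core idea — weighting each donor's delta by how strongly it agrees with the other donors — can be sketched in a few lines of NumPy. This is only an illustrative simplification, not the actual FlowSync implementation: the adaptive temperature scaling, Nyström-based semantic gating, and MAD normalization are omitted, and the function name and signature are hypothetical.

```python
import numpy as np

def flowsync_merge_sketch(base, donors, weights, strength=1.0, temperature=1.0):
    """Illustrative coherence-weighted delta merge (not the real FlowSync)."""
    # Deltas of each donor model relative to the shared base
    deltas = [d - base for d in donors]
    flat = [d.ravel() for d in deltas]

    # Pairwise cosine similarity measures inter-model agreement
    n = len(deltas)
    coherence = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                num = float(flat[i] @ flat[j])
                den = np.linalg.norm(flat[i]) * np.linalg.norm(flat[j]) + 1e-8
                coherence[i] += num / den
    coherence /= max(n - 1, 1)

    # Softmax over coherence: coherent donors dominate, noisy ones are damped
    gate = np.exp(coherence / temperature)
    gate /= gate.sum()

    # Consensus delta, scaled by per-model weights and global strength
    consensus = sum(g * w * d for g, w, d in zip(gate, weights, deltas))
    return base + strength * consensus
```

With three identical donors and equal weights of 0.33, the gate is uniform and the result is simply the base shifted by 0.33 of the common delta; disagreeing donors would instead see their contribution down-weighted.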
### Key Parameters

- `strength` (global): Scales how far the merged tensor moves from the base toward the consensus vector. Higher values emphasize donor influence.
- `balance` (global): Modulates the bias between logic-dominant (attention/MLP) layers and style-dominant (embedding/output) layers.
## Credits
- Team Mradermacher — Static & imatrix quants
- DeathGodlike — EXL3 quants
- Original creators and model authors