# merge

This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method, with StefanKrsteski/Llama-3.2-1B-EPE-pretrain as the base model.
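SLERP blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve the scale and direction of the parent weights better than plain averaging. The snippet below is a minimal, illustrative sketch of that per-tensor operation, not mergekit's actual implementation; the function name and the colinearity fallback threshold are assumptions for the example.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate two weight tensors; t=0 returns v0, t=1 returns v1."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    dot = torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0)
    omega = torch.arccos(dot)          # angle between the two weight vectors
    if omega.abs() < 1e-6:             # nearly colinear: fall back to linear interpolation
        out = (1.0 - t) * a + t * b
    else:
        so = torch.sin(omega)
        out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)
```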
### Models Merged

The following models were included in the merge:

- UmbrellaInc/T-Virus_Epsilon.Strain-3.2-1B
- StefanKrsteski/Llama-3.2-1B-EPE-pretrain
### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Umbrella Corporation Official Merge Protocol v3.2
# Author: Dr. Novaciano
# Objective: Integrate T-Virus_Epsilon traits into the base StefanKrsteski/Llama-3.2-1B-EPE-pretrain
# with minimal behavioral censorship while maintaining structural coherence.
models:
  - model: UmbrellaInc/T-Virus_Epsilon.Strain-3.2-1B   # Experimental viral strain neural imprint
  - model: StefanKrsteski/Llama-3.2-1B-EPE-pretrain    # Baseline cognitive template, "safe mode"
merge_method: slerp  # Spherical linear interpolation to preserve extreme viral traits smoothly
base_model: StefanKrsteski/Llama-3.2-1B-EPE-pretrain   # Anchor model for a stable latent space
dtype: bfloat16  # Memory-efficient precision, minimal loss in viral feature fidelity
parameters:
  # Interpolation schedule: a list of t values defines a layer-wise gradient, ramping from
  # the base model (t=0.0) in the earliest layers toward near-complete T-Virus domination
  # (t=0.95) in the final layers. Higher t gives the viral strain more influence over that layer band.
  t: [0.0, 0.25, 0.5, 0.75, 0.95]
  # Notes (per layer band, from first to last):
  # - t=0.0  -> pure StefanKrsteski/Llama-3.2-1B-EPE-pretrain, fully stable, heavily censored
  # - t=0.25 -> slight viral traits, minimal influence on prompt handling
  # - t=0.5  -> balanced blend, moderate reduction in censorship
  # - t=0.75 -> strong T-Virus traits, significantly less censoring
  # - t=0.95 -> near-total viral influence, maximum expressive freedom, minimal self-protection
  # Recommendation: the later layers lean heavily toward the experimental strain, so verify
  # coherence on high-stakes prompts.
```
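A configuration like this is typically run with mergekit's `mergekit-yaml` command to produce the merged weights. Once merged, the checkpoint can be loaded like any other Llama-3.2-1B model with transformers. The sketch below is a minimal, hypothetical usage example assuming the published repo id Novaciano/EPstrain-3.2-1b; the prompt and generation settings are placeholders.

```python
# Minimal usage sketch (assumption: the merged model and its tokenizer live at the
# repo id below; adjust the path if you ran the merge locally).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Novaciano/EPstrain-3.2-1b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Summarize the merge protocol in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```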