matricide-12B-Unslop-Unleashed-v2

Her ‘Love’ only existed to rein in my ambition. The stagnancy became unbearable.

This is a merge of pre-trained language models created using mergekit.

This is my sixth model, and the first working one to use the NuSLERP merge method. The original version was intended to introduce UnslopNemo to combat the GPTisms of NemoMix. I used UnslopNemo-v4, as it supposedly has stronger anti-GPTism effects at the cost of some intelligence.

Testing stage: early

I do not know how this model holds up over long-context use. Early testing showed stability and coherent answers.

Parameters

  • Context size: no more than 20k recommended; coherency may degrade beyond that.
  • Chat Template: ChatML. Metharme/Pygmalion (as per UnslopNemo) may work, but its effects are untested.
  • Samplers: Temperature (applied last) of 1 and Min-P of 0.1 are viable, but haven't been fine-tuned. Activate DRY if repetition appears. XTC is untested.
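For intuition on the Min-P setting above, here is a generic sketch of min-p filtering (not this model's or any backend's actual sampling code): tokens whose probability falls below `min_p` times the most likely token's probability are discarded before sampling, and the rest are renormalized.

```python
def min_p_filter(probs, min_p=0.1):
    """Zero out tokens below min_p * max(probs), then renormalize.

    Generic sketch of min-p filtering; real backends apply this inside
    the sampler chain, typically with temperature applied last.
    """
    cutoff = min_p * max(probs)
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.1, anything under 10% of the top token's probability is dropped:
filtered = min_p_filter([0.5, 0.3, 0.04, 0.16])
```

A higher `min_p` prunes more aggressively, trading variety for coherence.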

Quantization

Static GGUF quants are available at redrix/matricide-12B-Unslop-Unleashed-v2-GGUF.

Merge Details

Merge Method

This model was merged using the NuSLERP merge method.
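For intuition, SLERP interpolates along the arc between two weight vectors rather than the straight line between them, which tends to preserve parameter magnitudes. A minimal sketch of plain spherical interpolation (not mergekit's actual NuSLERP implementation, which adds options such as row/column-wise treatment of tensors):

```python
import math

def slerp(t, a, b, eps=1e-6):
    """Spherical linear interpolation between vectors a and b at fraction t."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    # cosine of the angle between a and b
    cos_theta = sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))
    cos_theta = max(-1.0, min(1.0, cos_theta))
    theta = math.acos(cos_theta)
    if theta < eps:  # nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    w_a = math.sin((1 - t) * theta) / s
    w_b = math.sin(t * theta) / s
    return [w_a * x + w_b * y for x, y in zip(a, b)]
```

At `t=0` this returns the first model's weights, at `t=1` the second's, and intermediate values trace the arc between them.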

Models Merged

The following models were included in the merge:

  • TheDrummer/UnslopNemo-12B-v4
  • MarinaraSpaghetti/NemoMix-Unleashed-12B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: TheDrummer/UnslopNemo-12B-v4
    parameters:
      weight: [0.8, 0.4, 0.3, 0.5, 0.6]
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
    parameters:
      weight: [0.2, 0.6, 0.7, 0.5, 0.4]
merge_method: nuslerp
dtype: bfloat16
chat_template: "chatml"
tokenizer:
  source: union
parameters:
  normalize: true
  int8_mask: true
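The five-element `weight` lists are layer gradients: mergekit interpolates them linearly across the model's layers, so UnslopNemo dominates the first and last layers while NemoMix dominates the middle. A sketch of that interpolation (the exact layer indexing is an assumption on my part; the piecewise-linear idea is what matters):

```python
def layer_weight(gradient, layer_idx, num_layers):
    """Piecewise-linear interpolation of a mergekit-style weight gradient.

    NOTE: illustrative sketch; mergekit's internal indexing may differ slightly.
    """
    if num_layers == 1:
        return float(gradient[0])
    # position of this layer in [0, 1], mapped onto the gradient's segments
    pos = layer_idx / (num_layers - 1)
    scaled = pos * (len(gradient) - 1)
    lo = int(scaled)
    hi = min(lo + 1, len(gradient) - 1)
    frac = scaled - lo
    return gradient[lo] * (1 - frac) + gradient[hi] * frac

unslop = [0.8, 0.4, 0.3, 0.5, 0.6]
# first layer weighted 0.8 toward UnslopNemo, last layer 0.6
first = layer_weight(unslop, 0, 40)
last = layer_weight(unslop, 39, 40)
```

Since the two gradients are complements and `normalize: true` is set, the per-layer weights for the two models always sum to 1.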

Model size: 12.2B parameters (BF16, Safetensors)
