

GGUF quants of nocudaexe/Neural-Dark-Waifu-v0.2, tested to 15,872 tokens of context! I figured out quantization, so Q4 and Q5 are now available.
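As a minimal local-inference sketch with llama-cpp-python (the `.gguf` filename and the prompt format below are placeholders, not confirmed by this card; substitute the quant file you actually download and the prompt template of the underlying models):

```python
# Minimal sketch: run one of the GGUF quants locally with llama-cpp-python.
# The filename below is a placeholder; use the quant file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="neural-dark-waifu-v0.2.Q4_K_M.gguf",  # placeholder filename
    n_ctx=15872,      # matches the context length the card reports as tested
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# The prompt format here is a guess; check the base models' cards for their template.
output = llm("USER: Introduce yourself.\nASSISTANT:", max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```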

The first attempt at Neural-Dark-Waifu started showing strange behaviour at 4-8k context, so I re-merged and did more testing; this version exhibits the correct behaviour. It is an attempt to merge mlabonne/AlphaMonarch-7B's excellent chat ability with a bunch of ERP thrown at it to bring down its guardrails, plus Test157t/Kunocchini-7b-128k-test to increase context.


---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- nocudaexe/Dark-Waifu-7b
- nocudaexe/Infinite-Waifu
---

# DarkNeural

DarkNeural is a merge of the following models using mergekit:

* nocudaexe/Dark-Waifu-7b
* nocudaexe/Infinite-Waifu

## 🧩 Configuration

```yaml
models:
  # No parameters necessary for base model
  - model: nocudaexe/Dark-Waifu-7b
    parameters:
      density: 0.33
      weight: 0.4
  - model: nocudaexe/Infinite-Waifu
    parameters:
      density: 0.38
      weight: 0.3
merge_method: Slerp
parameters:
  int8_mask: true
dtype: bfloat16
```
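A rough 🤗 Transformers inference sketch for the full-weight merge (not these GGUF files); the repo id is taken from the description at the top of this card and may not match the actual hub name, and the chat template is assumed to come from the merged tokenizer:

```python
# Rough inference sketch for the full-weight merge; adjust the repo id if it differs.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "nocudaexe/Neural-Dark-Waifu-v0.2"  # assumed hub id, from the card description
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```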

Model size: 7.24B parameters (llama architecture). GGUF quantizations: 4-bit, 5-bit, 8-bit, 16-bit.
