---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
license: cc-by-4.0
---
This repo contains GGUF-IQ-Imatrix quantized model files for [Erosumika-7B-v3-0.2](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2).
Quants to be uploaded:
Q4_K_M, Q4_K_S, IQ4_XS, Q5_K_M, Q5_K_S, Q6_K, Q8_0, IQ3_M, IQ3_S, IQ3_XXS
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/L2l7DXKyLTD6MuL16JPn1.png)
## Model information
Erosumika-7B-v3-0.2
~Mistral 0.2 Edition~
## Model Details
This is the Mistral 0.2 version of Erosumika-7B-v3: a DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). The Alpaca and ChatML prompt formats work best. It is slightly smarter and has better prompt comprehension than the Mistral 0.1 version of Erosumika-7B-v3. 32k context should work.
[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-0.2-GGUF)
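Since Alpaca and ChatML are the recommended formats, here is a small sketch of what those templates typically look like (standard layouts, not taken from this repo's files), so you can match them against your frontend's presets.

```python
# Standard Alpaca and ChatML prompt layouts (assumed typical forms; adjust the
# system text and stop strings to match your frontend's presets).

def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(alpaca_prompt("Continue the story from the last paragraph."))
print(chatml_prompt("You are a creative writing partner.", "Continue the story."))
```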
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs may also be factually wrong or misleading.
The mergekit configuration used for the merge:
```yaml
merge_method: task_arithmetic
base_model: alpindale/Mistral-7B-v0.2-hf
models:
  - model: localfultonextractor/Erosumika-7B-v3
    parameters:
      weight: 1.0
dtype: float16
```