---
tags:
- merge
- gguf
- not-for-all-audiences
- storywriting
- text adventure
---
# maid-yuzu-v8-alter-iMat-GGUF
A highly requested model, quantized from fp16 with love. The iMatrix file was calculated from the Q8 quant using an input file from [this discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).

For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747).

All quants were verified working before upload, for your safety and convenience.

The original model card can be found [here](https://huggingface.co/rhplus0831/maid-yuzu-v8-alter).
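For reference, the iMatrix workflow described above can be sketched with llama.cpp's own tools. This is a hedged sketch, not the exact commands used for this repo: the file names are placeholders, and the quant type shown (Q4_K_M) is just one example target.

```shell
# Sketch of the iMatrix quantization flow (placeholder paths):

# 1. Compute an importance matrix from the Q8 quant, using the
#    input text file linked in the discussion above.
./imatrix -m maid-yuzu-v8-alter.Q8_0.gguf -f imatrix-input.txt -o imatrix.dat

# 2. Quantize the original fp16 GGUF down to a smaller type,
#    guided by the importance matrix.
./quantize --imatrix imatrix.dat \
    maid-yuzu-v8-alter.fp16.gguf \
    maid-yuzu-v8-alter.Q4_K_M.gguf Q4_K_M
```

The iMatrix records which weights matter most on representative text, so the quantizer can spend its limited precision where it helps perplexity the most.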