My alternate quantizations.
#16 opened by ZeroWw
These are my own quantizations (updated almost daily).
You can find them here: https://huggingface.co/ZeroWw/aya-23-8B-GGUF
They are not the usual quants: the output and embed tensors are kept at f16, while the other tensors are quantized at q5, q6, and q8.
The files are smaller than f16 with almost no degradation, even at q5_k.
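If you want to reproduce this kind of mixed-precision quant yourself, llama.cpp's llama-quantize tool has per-tensor override flags for the output and token-embedding weights. Below is a minimal sketch, assuming a recent llama.cpp build (one that includes the --output-tensor-type and --token-embedding-type options) and an f16 source GGUF; the file names and binary path are illustrative, not the exact ones used for the repo above.

```python
# Sketch: build q5/q6/q8 quants that keep output.weight and token_embd.weight at f16,
# by shelling out to llama.cpp's llama-quantize. Paths and file names are assumptions.
import subprocess

SRC = "aya-23-8B.f16.gguf"       # f16 source GGUF (e.g. from convert_hf_to_gguf.py)
QUANTIZE = "./llama-quantize"    # path to the llama.cpp quantize binary

for qtype in ("Q5_K_M", "Q6_K", "Q8_0"):
    dst = f"aya-23-8B.{qtype.lower()}.gguf"
    subprocess.run(
        [
            QUANTIZE,
            "--output-tensor-type", "f16",    # keep the output tensor at f16
            "--token-embedding-type", "f16",  # keep the embedding tensor at f16
            SRC,
            dst,
            qtype,                            # base quant type for all other tensors
        ],
        check=True,
    )
    print("wrote", dst)
```

The trade-off this makes is the one described above: the output and embedding tensors stay at f16, which adds relatively little size on an 8B model, while everything else gets the smaller quantized footprint.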