CursedMatrix-8B-v9 GGUF Quantizations
The long journey from despair to acceptable perfection.
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: Khetterman/CursedMatrix-8B-v9.
Available Quantizations
| Type | Quantized GGUF Model | Size |
|---|---|---|
| Q4_0 | Khetterman/CursedMatrix-8B-v9-Q4_0.gguf | 4.34 GiB |
| Q6_K | Khetterman/CursedMatrix-8B-v9-Q6_K.gguf | 6.14 GiB |
| Q8_0 | Khetterman/CursedMatrix-8B-v9-Q8_0.gguf | 7.95 GiB |
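If you want to try one of these files locally, the sketch below shows one way to load a quantization through the llama-cpp-python bindings. It is a minimal, illustrative example, not an official recipe: the exact filename (assumed here to be CursedMatrix-8B-v9-Q4_0.gguf inside this repository), the context size, and the prompt are assumptions, and the Hub download step additionally requires the huggingface_hub package.

```python
# Minimal sketch: loading a GGUF quantization with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub` and that the
# Q4_0 file in this repo is named CursedMatrix-8B-v9-Q4_0.gguf.
from llama_cpp import Llama

# Download the chosen quantization from the Hub and load it locally.
llm = Llama.from_pretrained(
    repo_id="Khetterman/CursedMatrix-8B-v9-GGUF",
    filename="CursedMatrix-8B-v9-Q4_0.gguf",  # swap for Q6_K / Q8_0 as needed
    n_ctx=4096,  # context window; lower it if you are short on RAM
)

# Plain text completion; adjust sampling settings to taste.
output = llm("Once upon a time,", max_tokens=64)
print(output["choices"][0]["text"])
```

The larger quantizations (Q6_K, Q8_0) use more memory in exchange for lower quantization error; pick whichever fits your hardware.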
My thanks to the authors of the original models; your work is incredible. Have a good time!