CursedMatrix-8B-v9 GGUF Quantizations 🗲

The long journey from despair to acceptable perfection.

Logo: CursedMatrixLogo256.png

This model was converted to GGUF format using llama.cpp.

For more information about the model, see the original model card: Khetterman/CursedMatrix-8B-v9.
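If you want to reproduce a conversion like this yourself, the sketch below shows one way to do it with llama.cpp's own tooling. It is illustrative only, not the exact commands used for this repo: it assumes a local llama.cpp checkout with the llama-quantize binary built, and the quantization presets (Q4_K_M, Q6_K, Q8_0) and output filenames are assumptions based on the 4/6/8-bit levels listed below.

```python
# Hypothetical reproduction of a GGUF conversion/quantization pipeline.
# Assumes llama.cpp is cloned and built locally, and that the original
# safetensors model has been downloaded to ./CursedMatrix-8B-v9.
import subprocess

MODEL_DIR = "CursedMatrix-8B-v9"          # local copy of the original HF model
F16_GGUF = "CursedMatrix-8B-v9-F16.gguf"  # intermediate full-precision GGUF

# 1. Convert the Hugging Face checkpoint to a GGUF file (llama.cpp script).
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the GGUF to common 4/6/8-bit presets (names are assumptions).
for quant in ("Q4_K_M", "Q6_K", "Q8_0"):
    subprocess.run(
        ["llama.cpp/llama-quantize", F16_GGUF,
         f"CursedMatrix-8B-v9-{quant}.gguf", quant],
        check=True,
    )
```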

Available Quantizations (◕‿◕)

My thanks to the authors of the original models; your work is incredible. Have a good time 🖤

Model size: 8.03B parameters
Architecture: llama
Quantization levels: 4-bit, 6-bit, 8-bit
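A minimal sketch of running one of these quantizations with llama-cpp-python follows. The GGUF filename below is hypothetical, since the card does not list exact file names; substitute the file for the quantization level you want.

```python
# Download a quantized GGUF from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Khetterman/CursedMatrix-8B-v9-GGUF",
    filename="CursedMatrix-8B-v9-Q8_0.gguf",  # hypothetical 8-bit file name
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 for CPU only
)

output = llm(
    "Write a short gothic poem about matrices.",
    max_tokens=128,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```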
