AbominationScience-12B-v4 GGUF Quantizations
When the choice is not random.
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: Khetterman/AbominationScience-12B-v4.
Available Quantizations
| Type | Quantized GGUF Model | Size |
|---|---|---|
| Q4_0 | Khetterman/AbominationScience-12B-v4-Q4_0.gguf | 6.58 GiB |
| Q6_K | Khetterman/AbominationScience-12B-v4-Q6_K.gguf | 9.36 GiB |
| Q8_0 | Khetterman/AbominationScience-12B-v4-Q8_0.gguf | 12.1 GiB |
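To try one of these files locally, here is a minimal sketch using the llama-cpp-python bindings together with huggingface_hub. The repository ID and file name below are assumptions inferred from the table above and may need adjusting to match the actual files in this repo.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from this repository.
# NOTE: repo_id and filename are assumptions based on the table above.
model_path = hf_hub_download(
    repo_id="Khetterman/AbominationScience-12B-v4-GGUF",
    filename="AbominationScience-12B-v4-Q4_0.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("Q: What is the GGUF format? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

The Q4_0 file is the smallest of the three; swap in the Q6_K or Q8_0 file name for higher quality at the cost of more memory.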
My thanks to the authors of the original models; your work is incredible. Have a good time!