This repository hosts GGUF-IQ-Imatrix quantizations for Virt-io/FuseChat-Kunoichi-10.7B.

Uploaded:

    quantization_options = [
        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", 
        "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
    ]
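To pick a quant for your hardware, it helps to estimate each file's size. The sketch below is a rough illustration, not official data: the `APPROX_BPW` bits-per-weight figures are commonly cited community approximations for these llama.cpp quant types, and `estimated_size_gb` is a hypothetical helper, not part of this repository.

```python
# Rough file-size estimator for the uploaded quants of a
# 10.7B-parameter model. Bits-per-weight values are approximate,
# community-cited figures for llama.cpp quant types — treat the
# results as ballpark numbers only.
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M",
    "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS",
]

APPROX_BPW = {
    "Q4_K_M": 4.85, "Q4_K_S": 4.58, "IQ4_XS": 4.25,
    "Q5_K_M": 5.69, "Q5_K_S": 5.54, "Q6_K": 6.59, "Q8_0": 8.50,
    "IQ3_M": 3.66, "IQ3_S": 3.44, "IQ3_XS": 3.30, "IQ3_XXS": 3.06,
}

def estimated_size_gb(quant: str, n_params: float = 10.7e9) -> float:
    """Approximate GGUF file size in GB: params * bits-per-weight / 8."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

for quant in sorted(quantization_options, key=lambda q: APPROX_BPW[q]):
    print(f"{quant:8s} ~{estimated_size_gb(quant):5.1f} GB")
```

As a rule of thumb, pick the largest quant that fits in your VRAM with room left for the KV cache; IQ3 variants trade quality for size at the low end.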

Format: GGUF
Model size: 10.7B params
Architecture: llama

Precisions available: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Part of a collection that includes Lewdiculous/FuseChat-Kunoichi-10.7B-GGUF-IQ-Imatrix.