Bielik-11B-v2.2-Instruct-HQQ-4bit-128gs

This repo contains HQQ (4-bit, 128 group size) format model files for SpeakLeash's Bielik-11B-v2.2-Instruct.

DISCLAIMER: Be aware that quantized models show reduced response quality and may produce hallucinations!
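Usage example

For reference, a minimal loading sketch is shown below. It assumes the checkpoint can be loaded through the hqq library's Hugging Face integration (`HQQModelForCausalLM.from_quantized`); the exact entry point and its arguments can differ between hqq versions, so treat this as an illustration rather than the verified loading path for this repo. The prompt is only an example.

```python
# Minimal sketch, assuming the files in this repo were saved with the hqq
# library's Hugging Face integration (API names may vary between versions).
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = "speakleash/Bielik-11B-v2.2-Instruct-HQQ-4bit-128gs"

# Load the pre-quantized 4-bit (group size 128) weights and the tokenizer.
model = HQQModelForCausalLM.from_quantized(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a prompt with the instruct chat template and generate a reply.
messages = [{"role": "user", "content": "Kim był Mikołaj Kopernik?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As an alternative, recent transformers releases can quantize the base model on the fly via `HqqConfig(nbits=4, group_size=128)`, but that quantizes Bielik-11B-v2.2-Instruct at load time rather than using the pre-quantized files from this repo.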

Model description:

Responsible for model quantization

  • Remigiusz Kinas (SpeakLeash) - team leadership, conceptualization, calibration data preparation, process creation, and quantized model delivery.

Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join the SpeakLeash Discord.
