Converted with ComfyUI-GGUF/Tools

Shoutout to ostris, lodestone, city96, and others for being inspiring individuals.

Want other GGUF quantizations? Check out hum-ma's repo! (Q8_0, Q6_K, Q5_K_M, Q5_0, Q4_0, Q3_K_S, Q3_K_M)

Format: GGUF
Model size: 8.16B params
Architecture: flux
Quantization: 4-bit (Q4_K_M)


Model: Clybius/Flex.1-alpha-Q4_K_M-GGUF