
This is a direct GGUF conversion of Freepik/flux.1-lite-8B-alpha.

As this is a quantized model, not a finetune, all of the original license terms and restrictions still apply.

The model files can be used with the ComfyUI-GGUF custom node.

Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.
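
If you prefer to fetch a file from Python rather than by hand, a minimal sketch using the huggingface_hub package is shown below. The exact GGUF filename is an assumption; check the repository's file list for the quantization you want, and point `local_dir` at your own ComfyUI install.

```python
from huggingface_hub import hf_hub_download

# Download one quantized variant into ComfyUI's unet folder.
# The filename is an assumption; pick the quantization you
# actually want from the repository's file list.
hf_hub_download(
    repo_id="city96/flux.1-lite-8B-alpha-gguf",
    filename="flux.1-lite-8B-alpha-Q4_K_S.gguf",  # assumed filename
    local_dir="ComfyUI/models/unet",              # adjust to your install path
)
```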

Please refer to this chart for a basic overview of quantization types.

Model size: 8.16B params (GGUF)
Architecture: flux

Available quantization types: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit.

