
This is a direct GGUF conversion of black-forest-labs/FLUX.1-dev.

As this is a quantized model rather than a finetune, all of the original license terms and restrictions still apply.

The model files can be used with the ComfyUI-GGUF custom node.

Place the model files in ComfyUI/models/unet; see the GitHub readme for further installation instructions.
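
For a scripted download, here is a minimal sketch using the huggingface_hub Python library. The specific filename (flux1-dev-Q4_K_S.gguf) is an assumption for illustration; check the repository's file listing for the quantization you actually want.

```python
# Minimal sketch: download one quantized file from this repo and place it
# in the folder where ComfyUI expects UNet model weights.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="city96/FLUX.1-dev-gguf",
    filename="flux1-dev-Q4_K_S.gguf",  # assumed filename; pick any quant from the repo's file list
    local_dir="ComfyUI/models/unet",   # target folder from the instructions above
)
print(f"Model file saved to: {path}")
```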

Please refer to this chart for a basic overview of quantization types.

Model size: 11.9B params (GGUF)
Architecture: flux
Available quantization types: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

