I use sd_turbo to test SD2 support in sd.cpp, so here are the GGUF files.

The "old" q8_0 is a direct conversion; converting to f16 first and then to q8_0 gave an equivalently performing model with a smaller file size.

Use `--cfg-scale 1 --steps 8`, and optionally `--schedule karras`.

The model only really produces OK output at 512x512.
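Putting the flags above together, an invocation with the stable-diffusion.cpp `sd` binary might look like this (the model and output file names here are placeholders, not part of this repo's instructions):

```shell
# Sketch of an SD-Turbo run with sd.cpp: CFG disabled (scale 1),
# 8 steps, karras schedule, and the 512x512 resolution the model expects.
./sd -m sd_turbo-q8_0.gguf \
     -p "a photo of a cat" \
     --cfg-scale 1 \
     --steps 8 \
     --schedule karras \
     -W 512 -H 512 \
     -o output.png
```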
