---
license: other
license_name: stabilityai-ai-community
license_link: LICENSE
language:
  - en
base_model:
  - stabilityai/stable-diffusion-3.5-medium
pipeline_tag: text-to-image
tags:
  - stable-diffusion
  - gguf-comfy
---

GGUF quantized version of Stable Diffusion 3.5 Medium

screenshot

Setup (once)

  • drag sd3.5_medium-q5_0.gguf (2.02GB) to > ./ComfyUI/models/unet
  • drag clip_g.safetensors (1.39GB) to > ./ComfyUI/models/clip
  • drag clip_l.safetensors (246MB) to > ./ComfyUI/models/clip
  • drag t5xxl_fp8_e4m3fn.safetensors (4.89GB) to > ./ComfyUI/models/clip
  • drag diffusion_pytorch_model.safetensors (168MB) to > ./ComfyUI/models/vae

Run it straight (no installation needed)

  • run the .bat file in the main directory (assuming you are using the gguf-comfy pack below)
  • drag the workflow JSON file (see below) into your browser
  • generate your first picture with sd3.5, awesome!

Workflows

  • example workflow for gguf (if it doesn't work, upgrade your pack: ggc y) 👻
  • example workflow for the original safetensors 🎃

Bug reports (or brief review)

  • t/q1_0 and t/q2_0 don't work (invalid GGML quantization type error)
  • q2_k is super fast, but the output might only be good for medical research or abstract styles
  • the q3 family is fast; finger issues are easy to spot, but picture quality is surprisingly good
  • note that the _0 and _s variants share the same file size; all of them are kept here, so see which one works for you
  • q4 and above should be no problem for general- to high-quality production

Upper tier sets

References