Stable Diffusion XL
Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach.
The abstract from the paper is:
We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators.
Tips
- Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR which recommends for ODE/SDE solvers (a scheduler configuration sketch follows this list):
  - set `use_karras_sigmas=True` or `lu_lambdas=True` to improve image quality
  - set `euler_at_final=True` if you're using a solver with uniform step sizes (DPM++2M or DPM++2M SDE)
- Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren't as good. Anything below 512x512 is not recommended and likely won't work for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0.
- SDXL accepts a different prompt for each of the text encoders it was trained on. You can even pass different parts of the same prompt to the two encoders (see the two-prompt sketch below).
- SDXL output images can be improved by making use of a refiner model in an image-to-image setting (see the refiner sketch below).
- SDXL offers `negative_original_size`, `negative_crops_coords_top_left`, and `negative_target_size` to negatively condition the model on image resolution and cropping parameters (see the negative-conditioning sketch below).
To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide.
Check out the Stability AI Hub organization for the official base and refiner model checkpoints!
StableDiffusionXLPipeline
[[autodoc]] StableDiffusionXLPipeline
    - all
    - __call__
StableDiffusionXLImg2ImgPipeline
[[autodoc]] StableDiffusionXLImg2ImgPipeline
    - all
    - __call__
StableDiffusionXLInpaintPipeline
[[autodoc]] StableDiffusionXLInpaintPipeline
    - all
    - __call__