---
license: mit
prior:
- warp-diffusion/wuerstchen-prior
tags:
- text-to-image
- wuerstchen
---
Würstchen - Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is far more expensive than training on 32x32 images. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before, because common methods fail to faithfully reconstruct detailed images beyond 16x spatial compression. Würstchen employs a two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used by current top-performing models, which also allows cheaper and faster inference.
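To make the cost argument concrete, here is a rough back-of-the-envelope sketch. The assumption that diffusion cost scales with the number of latent positions is a simplification for illustration, not a claim from the paper:

```python
def latent_side(image_side: int, compression: int) -> int:
    """Side length of the square latent after spatial compression."""
    return image_side // compression

image_side = 1024
for factor in (4, 8, 42):
    side = latent_side(image_side, factor)
    print(f"{factor:>2}x compression: {side}x{side} latent")

# If diffusion cost scales with the number of latent positions,
# the 42x latent is far cheaper to work in than an 8x latent:
ratio = latent_side(image_side, 8) ** 2 / latent_side(image_side, 42) ** 2
print(f"~{ratio:.0f}x fewer latent positions than at 8x compression")
```

At 1024x1024, a 42x compression leaves a 24x24 latent, versus 128x128 at 8x: roughly 28 times fewer positions for the diffusion model to process.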
Würstchen - Decoder
The Decoder is what we refer to as "Stage A" and "Stage B". It takes in image embeddings, either generated by the Prior (Stage C) or extracted from a real image, and decodes those latents back into pixel space. Specifically, Stage B first decodes the image embeddings into the VQGAN space, and Stage A (the VQGAN itself) decodes those latents into pixel space. Together, they achieve a 42x spatial compression.
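As a sketch of the shape flow through the decode path for a square 1024x1024 image (the 4x factor for the Stage A VQGAN is an assumption for illustration; the paper gives the exact factors):

```python
# Decode path shapes for a square 1024x1024 image.
# Assumed factors: ~42x spatial compression overall, 4x for the Stage A VQGAN.
image_side = 1024
stage_c_side = image_side // 42   # latent the Prior (Stage C) produces
vqgan_side = image_side // 4      # VQGAN latent that Stage B decodes into
print(f"Stage C latent : {stage_c_side}x{stage_c_side}")
print(f"VQGAN latent   : {vqgan_side}x{vqgan_side}  (after Stage B)")
print(f"Pixel space    : {image_side}x{image_side}  (after Stage A)")
```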
Note: The reconstruction is lossy and loses information from the image. The current Stage B often lacks detail in its reconstructions, which is especially noticeable to us humans in faces, hands, etc. We are working on making these reconstructions even better in the future!
Image Sizes
Würstchen was trained on image resolutions between 1024x1024 and 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out. We also observed that the Prior (Stage C) adapts extremely quickly to new resolutions, so finetuning it at 2048x2048 should be computationally cheap.
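When experimenting with resolutions, sizes that divide cleanly through the compression stages tend to work best. The hypothetical helper below (`snap_resolution` is our name, and the multiple-of-128 granularity is an assumption, not a documented constraint of this card) snaps a requested side length to the nearest such size:

```python
def snap_resolution(size: int, multiple: int = 128) -> int:
    """Round a requested side length to the nearest multiple (at least one multiple)."""
    return max(multiple, round(size / multiple) * multiple)

for requested in (1000, 1280, 1500):
    print(requested, "->", snap_resolution(requested))
```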
How to run
This pipeline should be run together with the prior https://huggingface.co/warp-ai/wuerstchen-prior:
```python
import torch
from diffusers import AutoPipelineForText2Image

device = "cuda"
dtype = torch.float16

pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-diffusion/wuerstchen", torch_dtype=dtype
).to(device)

caption = "Anthropomorphic cat dressed as a fire fighter"

output = pipeline(
    prompt=caption,
    height=1024,
    width=1024,
    prior_guidance_scale=4.0,
    decoder_guidance_scale=0.0,
).images
```
Image Sampling Times
The figure shows the inference times (on an A100) for different batch sizes (`num_images_per_prompt`) on Würstchen compared to Stable Diffusion XL (without refiner). The left figure shows inference times (using torch > 2.0), whereas the right figure applies `torch.compile` to both pipelines in advance.
Model Details
- Developed by: Pablo Pernias, Dominic Rampas
- Model type: Diffusion-based text-to-image generation model
- Language(s): English
- License: MIT
- Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the Würstchen paper that uses a fixed, pretrained text encoder (CLIP ViT-bigG/14).
- Resources for more information: GitHub Repository, Paper.
Cite as:
```
@inproceedings{pernias2024wrstchen,
  title={W\"urstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models},
  author={Pablo Pernias and Dominic Rampas and Mats Leon Richter and Christopher Pal and Marc Aubreville},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=gU58d5QeGv}
}
```
Environmental Impact
Würstchen v2 - Estimated Emissions
Based on the information below, we estimate the following CO2 emissions using the Machine Learning Impact calculator presented in Lacoste et al. (2019). The hardware type, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- Hardware Type: A100 PCIe 40GB
- Hours used: 24602
- Cloud Provider: AWS
- Compute Region: US-east
- Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 2275.68 kg CO2 eq.
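The reported figure can be reproduced with the calculator's formula. The power draw and grid carbon intensity below are assumptions chosen to recover the published number; they are not stated in this card:

```python
# Lacoste et al. (2019): emissions = power (kW) x time (h) x intensity (kg CO2eq/kWh)
hours = 24602
power_kw = 0.250   # assumed draw of an A100 PCIe 40GB
intensity = 0.37   # assumed kg CO2eq per kWh for the AWS US-east grid
emissions_kg = hours * power_kw * intensity
print(f"{emissions_kg:.1f} kg CO2 eq")
```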