---
license: other
tags:
  - stable-diffusion
  - text-to-image
inference: false
---

# Stable Diffusion

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. This model card gives an overview of all available model checkpoints. For more in-detail model cards, please have a look at the model repositories listed under Model Access.

## Stable Diffusion Version 1

For the first version, four model checkpoints are released: stable-diffusion-v1-1, stable-diffusion-v1-2, stable-diffusion-v1-3, and stable-diffusion-v1-4. Higher versions have been trained for longer and are thus usually better in terms of image generation quality than lower versions.

Each checkpoint can be used with either 🤗's diffusers library or the original Stable Diffusion GitHub repository. Note that you have to "click-request" access on each respective model repository before downloading the weights.
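
As a minimal sketch, loading a checkpoint with the diffusers library might look as follows (the checkpoint name `CompVis/stable-diffusion-v1-4`, the prompt, and the fp16/CUDA settings are illustrative choices, not requirements):

```python
# Minimal sketch: text-to-image generation with 🤗 diffusers.
# Assumes access to the model repository has been granted and a
# CUDA-capable GPU is available; adjust dtype/device otherwise.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # any of the v1 checkpoints can be substituted
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]  # the pipeline returns a list of PIL images
image.save("astronaut_rides_horse.png")
```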

## Citation

    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }

This model card was written by Robin Rombach and Patrick Esser and is based on the DALL-E Mini model card.