---
license: cc-by-nc-4.0
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
tags:
  - lora
  - text-to-image
  - diffusers
  - stable-diffusion
inference: false
---

# ⚡ Flash Diffusion: FlashSD ⚡

Flash Diffusion is a diffusion distillation method proposed in *Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation* by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin from Jasper Research. This model is a 26.4M-parameter LoRA-distilled version of the SD1.5 model that can generate images in 2 to 4 steps. Its main purpose is to reproduce the main results of the paper. See our live demo and official GitHub repo.

## How to use?

The model can be used directly with the `StableDiffusionPipeline` from the `diffusers` library. It reduces the number of required sampling steps to 2 to 4.

```python
from diffusers import StableDiffusionPipeline, LCMScheduler

adapter_id = "jasperai/flash-sd"

# Load the base SD1.5 pipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    use_safetensors=True,
)

# Use the LCM scheduler with trailing timestep spacing for few-step sampling
pipe.scheduler = LCMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="scheduler",
    timestep_spacing="trailing",
)
pipe.to("cuda")

# Load the distilled LoRA weights and fuse them into the base model
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "A raccoon reading a book in a lush forest."

# Few-step generation: 4 steps, no classifier-free guidance
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
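
As a minimal follow-up sketch (reusing the pipeline built above; the output file names are illustrative), the 2-step setting reported in the metrics below only changes `num_inference_steps`, and the returned PIL images can be saved as usual:

```python
# 2-step generation (the "2 NFE" setting from the metrics below);
# guidance_scale=0 disables classifier-free guidance, as in the 4-step example.
image_2_steps = pipe(prompt, num_inference_steps=2, guidance_scale=0).images[0]

# Save the generated PIL images to disk (file names are illustrative).
image.save("raccoon_4_steps.png")
image_2_steps.save("raccoon_2_steps.png")
```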

## Training Details

The model was trained for 20k iterations on 2 H100 GPUs (approx. 26 GPU-hours of training in total). Please refer to the paper for further details on the training parameters.

## Metrics on COCO 2017 validation set (Table 1)

- FID-5k: 22.6 (2 NFE) / 22.5 (4 NFE)
- CLIP Score (ViT-g/14): 0.306 (2 NFE) / 0.311 (4 NFE)

## Metrics on COCO 2014 validation set (Table 2)

- FID-30k: 12.41 (4 NFE)
- FID-30k: 12.27 (2 NFE)

## Citation

If you find this work useful or use it in your research, please consider citing us:

```bibtex
@misc{chadebec2024flash,
      title={Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation}, 
      author={Clement Chadebec and Onur Tasar and Eyal Benaroche and Benjamin Aubin},
      year={2024},
      eprint={2406.02347},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## License

This model is released under the Creative Commons BY-NC 4.0 license.