---
license: creativeml-openrail-m
tags:
- keras
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- keras-dreambooth
- nature
inference: true
widget:
- text: a photo of puggieace dog on the beach, sunset in background
datasets:
- nielsgl/dreambooth-ace
library_name: keras
pipeline_tag: text-to-image
emoji: 🐶
---

# KerasCV Stable Diffusion in Diffusers 🧨🤗
DreamBooth model for the `puggieace` concept, trained by nielsgl on the `nielsgl/dreambooth-ace` dataset. It can be used by modifying the `instance_prompt`: **a photo of puggieace**.

The examples below are from two different KerasCV models (`StableDiffusion` and `StableDiffusionV2`, corresponding to Stable Diffusion V1.4 and V2.1, respectively) trained on the same dataset (`nielsgl/dreambooth-ace`).
## Description
The Stable Diffusion V2 pipeline contained in the corresponding repository (`nielsgl/dreambooth-keras-pug-ace-sd2.1`) was created with a modified version of this Space for `StableDiffusionV2` from KerasCV. Its purpose is to convert the KerasCV Stable Diffusion weights into a format compatible with Diffusers. This lets you fine-tune with KerasCV and then run the fine-tuned weights in Diffusers, taking advantage of its nifty features (schedulers, fast attention, and so on).
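As an illustration of those features, the sketch below loads the converted weights as a regular Diffusers pipeline, swaps in a different scheduler, and enables attention slicing. It assumes a CUDA device is available; the scheduler choice, dtype, and step count are illustrative, not part of the original setup.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load the converted DreamBooth weights as a regular Diffusers pipeline
pipeline = StableDiffusionPipeline.from_pretrained(
    "nielsgl/dreambooth-keras-pug-ace-sd2.1", torch_dtype=torch.float16
)

# Swap the default scheduler for a faster multistep solver (illustrative choice)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

# Reduce memory usage via attention slicing, then move to the GPU
pipeline.enable_attention_slicing()
pipeline = pipeline.to("cuda")

image = pipeline("a photo of puggieace dog on the beach", num_inference_steps=25).images[0]
```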
This model was created as part of the Keras DreamBooth Sprint 🔥. Visit the organisation page for instructions on how to take part!
## Examples
### Stable Diffusion V1.4

- Portrait of puggieace dog as a Roman Emperor, city in background
- Photo of puggieace dog wearing sunglasses on the beach, sunset in background, golden hour
- Photo of cute puggieace dog as an astronaut, planet and spaceship in background
### Stable Diffusion V2.1

- Portrait painting of a cute puggieace dog as a samurai
- Photo of cute puggieace dog as an astronaut, space and planet in background
- A photo of a cute puggieace dog getting a haircut in a barbershop
- Portrait photo of puggieace dog in New York
- Portrait of puggieace dog as a Roman Emperor, city in background
## Usage with Stable Diffusion V1.4

```python
from huggingface_hub import from_pretrained_keras
import keras_cv
import matplotlib.pyplot as plt

# Build the base KerasCV Stable Diffusion (V1.4) pipeline
model = keras_cv.models.StableDiffusion(img_width=512, img_height=512, jit_compile=True)

# Swap in the fine-tuned DreamBooth diffusion model and text encoder
model._diffusion_model = from_pretrained_keras("nielsgl/dreambooth-pug-ace")
model._text_encoder = from_pretrained_keras("nielsgl/dreambooth-pug-ace-text-encoder")

images = model.text_to_image("a photo of puggieace dog on the beach", batch_size=3)
plt.imshow(images[0])
```
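To keep the generated batch, the outputs can be written to disk. A minimal sketch, assuming `text_to_image` returns a `(batch, height, width, 3)` uint8 array (the file names here are illustrative, not from the original card):

```python
from PIL import Image

# Convert each generated sample to a PIL image and save it
for i, img in enumerate(images):
    Image.fromarray(img).save(f"puggieace_{i}.png")
```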
## Usage with Stable Diffusion V2.1

```python
from diffusers import StableDiffusionPipeline

# Load the converted DreamBooth weights as a Diffusers pipeline
pipeline = StableDiffusionPipeline.from_pretrained("nielsgl/dreambooth-keras-pug-ace-sd2.1")

image = pipeline("a photo of puggieace dog on the beach").images[0]
image
```
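For faster sampling and reproducible results, the pipeline can be moved to a GPU and driven with a seeded generator. A sketch under the assumption that a CUDA device is available (the seed, step count, and guidance scale are illustrative):

```python
import torch

# Move the pipeline to the GPU
pipeline = pipeline.to("cuda")

# Seed a generator so the same prompt yields the same image
generator = torch.Generator("cuda").manual_seed(42)

image = pipeline(
    "a photo of puggieace dog on the beach, sunset in background",
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("puggieace_beach.png")
```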
## Training hyperparameters
The following hyperparameters were used during training for Stable Diffusion v1.4:
| Hyperparameters | Value |
|---|---|
| name | RMSprop |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | 100 |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| rho | 0.9 |
| momentum | 0.0 |
| epsilon | 1e-07 |
| centered | False |
| training_precision | float32 |
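These values correspond to the serialized configuration of a Keras RMSprop optimizer. As a rough sketch reconstructed from the table (assuming the non-legacy `tf.keras.optimizers.RMSprop` API from TF 2.11+, not the original training script):

```python
import tensorflow as tf

# Rebuild the optimizer configuration listed in the table above
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=1e-3,  # 0.0010000000474974513 is the float32 value of 1e-3
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
    use_ema=False,
    jit_compile=True,
)
```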