---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
# Overview
This is a Diffusers-compatible version of Yiffymix v51 by chilon249. See the original page for more information.
Keep in mind that this is an SDXL-Lightning checkpoint, so using fewer steps (around 12 to 25) and a low guidance scale (around 4 to 6) is recommended for the best results. Using a clip skip of 2 is also recommended.
This repository uses DPM++ 2M Karras as its sampler (Diffusers only).
# Diffusers Installation 🧨
## Dependencies Installation
First, you'll need to install a few dependencies. This is a one-time operation; you only need to run the code once.
```python
!pip install -q diffusers transformers accelerate
```
## Model Installation
After the installation, you can run SDXL from this repository using the code below:
```python
from diffusers import StableDiffusionXLPipeline
import torch

model = "IDK-ab0ut/Yiffymix_v51-XL"
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, torch_dtype=torch.float16).to("cuda")

prompt = "a cat, detailed background, dynamic lighting"
negative_prompt = "low resolution, bad quality, deformed"
steps = 25
guidance_scale = 4

image = pipeline(prompt=prompt, negative_prompt=negative_prompt,
                 num_inference_steps=steps, guidance_scale=guidance_scale,
                 clip_skip=2).images[0]
image
```
Feel free to adjust the generation settings to your liking.
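For instance, here's a minimal sketch of a few other settings you may want to tweak. The resolution, seed, and output filename below are only illustrative values, not something prescribed by the original card:

```python
# Fix the seed for reproducible results (illustrative value).
generator = torch.Generator("cuda").manual_seed(42)

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=steps,
    guidance_scale=guidance_scale,
    clip_skip=2,
    width=1024, height=1024,   # SDXL's native resolution
    generator=generator,
).images[0]
image.save("output.png")       # save the result to disk
```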
# Scheduler Customization
## For Diffusers 🧨
You can see all available schedulers here.
To use a scheduler other than DPM++ 2M Karras with this repository, make sure to import the corresponding scheduler class from Diffusers. For example, to use Euler, first import EulerDiscreteScheduler by adding this line of code:
```python
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
```
The next step is to load the scheduler.
```python
model = "IDK-ab0ut/Yiffymix_v51"
euler = EulerDiscreteScheduler.from_pretrained(
    model, subfolder="scheduler")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, scheduler=euler, torch_dtype=torch.float16
).to("cuda")
```
Now you can generate images using the scheduler you want.
Another example uses DPM++ 2M SDE Karras. First, import DPMSolverMultistepScheduler from Diffusers.
```python
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
```
Next, load the scheduler into the pipeline.
```python
model = "IDK-ab0ut/Yiffymix_v51"
dpmsolver = DPMSolverMultistepScheduler.from_pretrained(
    model, subfolder="scheduler", use_karras_sigmas=True,
    algorithm_type="sde-dpmsolver++")
# 'use_karras_sigmas=True' makes the scheduler
# use Karras sigmas during sampling.
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, scheduler=dpmsolver, torch_dtype=torch.float16,
).to("cuda")
```
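As a side note, a common Diffusers pattern (not specific to this repository) is to swap the scheduler on a pipeline that has already been loaded, reusing its existing configuration:

```python
from diffusers import DPMSolverMultistepScheduler

# Rebuild the scheduler from the pipeline's current config and
# switch it to DPM++ 2M SDE Karras.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type="sde-dpmsolver++",
)
```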
# Variational Autoencoder (VAE) Installation
There are two ways to get a Variational Autoencoder (VAE) file into the model: download the file manually, or fetch it remotely from code. This section explains the code-based method, since it's the more efficient one. VAE files are usually in .safetensors format, and there are two websites you can download them from: HuggingFace and CivitAI.
## From HuggingFace
This method is pretty straightforward. Pick any VAE repository you like, navigate to "Files", and open the VAE's file. Use "Copy download link" on that file; you'll need the link later.
The next step is to import the AutoencoderKL class into the code.
```python
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
```
Finally, load the VAE file into AutoencoderKL.
```python
link = "your vae's link"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(link).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, vae=vae).to("cuda")
```
If you're using FP16 for the model, it's essential to also use FP16 for the VAE.
```python
link = "your vae's link"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(
    link, torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, torch_dtype=torch.float16,
    vae=vae).to("cuda")
```
For a manual download, simply set the link variable (or whatever string variable you use to load the VAE) to the local path of the .safetensors file.
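For example, a minimal sketch with a hypothetical local path:

```python
# Hypothetical path; point this at wherever you saved the .safetensors file.
local_vae_path = "/content/my_vae.safetensors"
vae = AutoencoderKL.from_single_file(
    local_vae_path, torch_dtype=torch.float16).to("cuda")
```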
### Troubleshooting
If you're experiencing an HTTP 404 error because the program can't resolve your link, here's a simple fix. First, install (or upgrade) huggingface_hub using pip.
```python
!pip install --upgrade huggingface_hub
```
Import hf_hub_download() from huggingface_hub.
```python
from huggingface_hub import hf_hub_download
```
Next, instead of a direct link to the file, use the repository ID and filename.
```python
repo = "username/model"
file = "the vae's file.safetensors"
vae = AutoencoderKL.from_single_file(
    hf_hub_download(repo_id=repo, filename=file)).to("cuda")
# use 'torch_dtype=torch.float16' for FP16.
# add the 'subfolder="folder_name"' argument if the VAE is in a specific folder.
```
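Putting those two comments together, here's a sketch with FP16 and a subfolder. The repository ID, filename, and folder name are placeholders, not real files:

```python
model = "IDK-ab0ut/Yiffymix_v51"
repo = "username/model"            # placeholder repository ID
file = "vae_name.safetensors"      # placeholder filename
vae = AutoencoderKL.from_single_file(
    hf_hub_download(repo_id=repo, filename=file, subfolder="vae"),
    torch_dtype=torch.float16,
).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, vae=vae, torch_dtype=torch.float16).to("cuda")
```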
## From CivitAI
It's trickier if the VAE is hosted on CivitAI, because you can't point the from_single_file() method at a link there; remote loading only works for files hosted on HuggingFace. You could re-upload the VAE to HuggingFace, but you must comply with the model's license before doing so. Alternatively, you can use the wget or curl command to get the file from outside HuggingFace. (To be continued)
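A rough sketch of that approach, assuming a notebook environment like the pip cell above and a placeholder download URL (some CivitAI files may also require an API token), is to download the file first and then load it from the local path:

```python
# Placeholder URL; replace it with the actual download link for the VAE file.
!wget -O my_vae.safetensors "https://civitai.com/api/download/models/XXXXXX"

from diffusers import AutoencoderKL
import torch

vae = AutoencoderKL.from_single_file(
    "my_vae.safetensors", torch_dtype=torch.float16).to("cuda")
```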