---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
# Overview πŸ“ƒβœοΈ
This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237).
See the original page for more information.

Keep in mind that this is an [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning) checkpoint,
so fewer steps (around 12 to 25) and a low guidance
scale (around 4 to 6) are recommended for the best results.
A clip skip of 2 is also recommended.

This repository uses DPM++ 2M Karras as its default sampler (Diffusers only).

# Diffusers Installation 🧨
### Dependencies Installation πŸ“
First, you'll need to install a few dependencies. This is a one-time operation; you only need to run it once per environment.
```py
!pip install -q diffusers transformers accelerate
```
### Model Installation πŸ’Ώ
After the installation, you can run SDXL with this repository using the code below:
```py
from diffusers import StableDiffusionXLPipeline
import torch

model = "IDK-ab0ut/Yiffymix_v51-XL"
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16).to("cuda")

prompt = "a cat, detailed background, dynamic lighting"
negative_prompt = "low resolution, bad quality, deformed"
steps = 25
guidance_scale = 4
image = pipeline(prompt=prompt, negative_prompt=negative_prompt,
        num_inference_steps=steps, guidance_scale=guidance_scale,
        clip_skip=2).images[0]
image
```
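Outside a notebook, the bare `image` at the end won't display anything; the pipeline returns a PIL image, so you can write it to disk instead:
```py
# 'image' is a PIL.Image.Image; save it to any file name you like.
image.save("result.png")
```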

Feel free to adjust the generation settings to your liking.

# Scheduler's Customization βš™οΈ
γ…€γ…€γ…€γ…€<small>🧨</small><b>For Diffusers</b><small>🧨</small>

You can see all available schedulers [here](https://huggingface.co/docs/diffusers/v0.11.0/en/api/schedulers/overview).

To use a scheduler other than DPM++ 2M Karras with this repository, import the
corresponding scheduler class from Diffusers. For example, to use Euler, first import [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/v0.29.2/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) by adding this line of code.
```py
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
```

The next step is to load the scheduler.
```py
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

model = "IDK-ab0ut/Yiffymix_v51-XL"
euler = EulerDiscreteScheduler.from_pretrained(
        model, subfolder="scheduler")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=euler, torch_dtype=torch.float16
           ).to("cuda")
```
Now you can generate any images using the scheduler you want.
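If you'd rather swap samplers on an already-loaded pipeline instead of reloading it, Diffusers also lets you rebuild a scheduler from the pipeline's current configuration. A minimal sketch, reusing the `pipeline` from the block above:
```py
from diffusers import EulerDiscreteScheduler

# Rebuild the scheduler from the pipeline's existing config and
# attach it in place; no reload of the model weights is needed.
pipeline.scheduler = EulerDiscreteScheduler.from_config(
    pipeline.scheduler.config)
```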

Another example uses DPM++ 2M SDE Karras. First, import [DPMSolverMultistepScheduler](https://huggingface.co/docs/diffusers/v0.29.2/api/schedulers/multistep_dpm_solver) from Diffusers.
```py
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
```
Next, load the scheduler into the model.
```py
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

model = "IDK-ab0ut/Yiffymix_v51-XL"
dpmsolver = DPMSolverMultistepScheduler.from_pretrained(
            model, subfolder="scheduler", use_karras_sigmas=True,
            algorithm_type="sde-dpmsolver++")
# 'use_karras_sigmas=True' makes the scheduler use Karras sigmas
# during sampling. Note that scheduler objects don't have a
# '.to()' method; only the pipeline is moved to the GPU.
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=dpmsolver, torch_dtype=torch.float16
           ).to("cuda")
```
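Generation then works exactly as in the first example; the settings below follow the recommendations above:
```py
image = pipeline(prompt="a cat, detailed background, dynamic lighting",
                 num_inference_steps=25, guidance_scale=4,
                 clip_skip=2).images[0]
```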
## Variational Autoencoder (VAE) Installation πŸ–Ό
There are two ways to get a [Variational Autoencoder (VAE)](https://huggingface.co/learn/computer-vision-course/en/unit5/generative-models/variational_autoencoders) file into the model: download
the file manually, or fetch it remotely with code. I'll explain the remote method here,
as it's the more efficient one. The first step is to get the VAE file.
VAE files are usually in .safetensors format, and there are two main places
to download them: HuggingFace and [CivitAI](https://civitai.com).
### From HuggingFace 😊
This method is pretty straightforward. Pick any VAE's repository you like. Then, navigate to "Files" and
the VAE's file. Make sure to click the file.

Click the "Copy Download Link" for the file, you'll need this.

Next step is to load [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl) pipeline into the code.
```py
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
```
Finally, load the VAE file into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
```py
link = "your VAE's download link"
model = "IDK-ab0ut/Yiffymix_v51-XL"
vae = AutoencoderKL.from_single_file(link).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")
```

If you're using FP16 for the model, it's essential to also use FP16 for the VAE.
```py
import torch

link = "your VAE's download link"
model = "IDK-ab0ut/Yiffymix_v51-XL"
vae = AutoencoderKL.from_single_file(
      link, torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16,
           vae=vae).to("cuda")
```
For manual download, just fill the `link` variable (or whatever string variable you use to
load the VAE) with the local path to the .safetensors file.
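For example (the path below is a placeholder):
```py
# A local file path instead of a download link (placeholder value).
link = "/path/to/your_vae.safetensors"
```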

##### <small><b>Troubleshooting</b></small> πŸ”§

In case you're experiencing an `HTTP404` error because
the program can't resolve your link, here's a simple fix.

First, install [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) using `pip`.
```py
!pip install --upgrade huggingface_hub
```
Import [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index).
```py
from huggingface_hub import hf_hub_download
```

Next, instead of a direct link to the file, use the repository ID.
```py
repo = "username/model"
file = "the vae's file.safetensors"
vae = AutoencoderKL.from_single_file(
      hf_hub_download(repo_id=repo,
      filename=file)).to("cuda")
# use 'torch_dtype=torch.float16' for FP16.
# add 'subfolder="folder_name"' argument if the VAE is in a specific folder.
```
You can also use [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index)
from the start, without first checking whether the direct-link method returns an `HTTP404` error.
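Putting the pieces together, here's a minimal sketch of the whole flow; the repository ID and filename are placeholders you'd replace with the VAE you actually want:
```py
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
from huggingface_hub import hf_hub_download

repo = "username/model"        # placeholder repository ID
file = "vae_file.safetensors"  # placeholder filename

# Download the VAE from the Hub, then load it as a single file.
local_path = hf_hub_download(repo_id=repo, filename=file)
vae = AutoencoderKL.from_single_file(
      local_path, torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           "IDK-ab0ut/Yiffymix_v51-XL", vae=vae,
           torch_dtype=torch.float16).to("cuda")
```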
### From CivitAI πŸ‡¨
It's trickier if the VAE is in [CivitAI](civitai.com), because you can't use
`from_single_file()` method. It only works for files inside HuggingFace and local files only. You can upload the VAE from there into
HuggingFace, but you must comply with the model's license before continuing. To solve this issue, you may
use `wget` or `curl` command to get the file from outside HuggingFace.

Before downloading, change to the directory where you want to save
the file with `cd`, to keep things organized.
Use the `-O` option (for `wget`) or `-o` (for `curl`) before the
file name to choose what the download is saved as.
```py
# For 'wget'
!cd <path>; wget -O [filename.safetensors] <link>

# For 'curl'
!cd <path>; curl -o [filename.safetensors] <link>

# Use only one of them. Replace "filename" with any
# name you want. If you run the code in Command Prompt or
# Windows Shell, you don't need the exclamation mark (!).
```
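If you'd rather stay in Python, the standard library can do the same download. A sketch, with the URL as a placeholder (note that some CivitAI downloads require you to be logged in):
```py
import urllib.request

# Placeholder: replace with the actual download link from CivitAI.
url = "https://civitai.com/api/download/models/<id>"
urllib.request.urlretrieve(url, "filename.safetensors")
```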

Since the file is now in your local directory, you can
finally use `from_single_file()` method normally. Make sure to
input the correct path for your VAE file. Load the VAE file into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
```py
path = "path to VAE" # Ends with .safetensors file format.
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(path).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")

# Use 'torch_dtype=torch.float16' for both
# AutoencoderKL and SDXL pipeline for FP16.
```
Now you have it: a VAE from [CivitAI](https://civitai.com), loaded and ready to use.
# That's all for this repository. Thank you for reading my silly note. Have a nice day!
#### Any help or suggestions will be appreciated. Thank you!