---
license: openrail++
language:
- en
pipeline_tag: text-to-image
tags:
- art
library_name: diffusers
---
# Overview📃✏️
This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237). See the original page for more information.

Keep in mind that this is an [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning) checkpoint model, so fewer steps (around 12 to 25) and a low guidance scale (around 4 to 6) are recommended for the best results. A clip skip of 2 is also recommended.

This repository uses DPM++ 2M Karras as its default sampling method (Diffusers only).

Check out the v52 [here](https://huggingface.co/IDK-ab0ut/Yiffymix_V52-XL).
# Diffusers Installation🧨
### Dependencies Installation📁
First, you'll need to install a few dependencies. This is a one-time operation: you only need to run it once.
```py
!pip install -q diffusers transformers accelerate
```
### Model Installation💿
After the installation, you can run SDXL with this repository using the code below:
```py
from diffusers import StableDiffusionXLPipeline
import torch

model = "IDK-ab0ut/Yiffymix_v51-XL"
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16).to("cuda")

prompt = "a cat, detailed background, dynamic lighting"
negative_prompt = "low resolution, bad quality, deformed"
steps = 25
guidance_scale = 4
image = pipeline(prompt=prompt, negative_prompt=negative_prompt,
        num_inference_steps=steps, guidance_scale=guidance_scale,
        clip_skip=2).images[0]
image
```

Feel free to adjust the generation settings to your liking.
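For reproducible results, the pipeline also accepts a `generator` argument: passing a seeded `torch.Generator` makes the same settings produce the same image every time. The sketch below illustrates the seeding idea on plain tensors (the shapes are arbitrary, purely for illustration), so it runs without loading the model:

```python
import torch

# Two generators with the same seed produce identical noise,
# which is why a seeded generator makes generations reproducible.
gen_a = torch.Generator().manual_seed(42)
gen_b = torch.Generator().manual_seed(42)

noise_a = torch.randn(4, generator=gen_a)
noise_b = torch.randn(4, generator=gen_b)

print(torch.equal(noise_a, noise_b))  # identical seeds -> identical noise
```

In the generation code above, you would pass `generator=torch.Generator("cuda").manual_seed(42)` to the pipeline call, and save the result with `image.save("output.png")`.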

# Scheduler's Customization⚙️
<small>🧨</small><b>For Diffusers</b><small>🧨</small>

You can see all available schedulers [here](https://huggingface.co/docs/diffusers/v0.11.0/en/api/schedulers/overview).

To use a scheduler other than DPM++ 2M Karras with this repository, import the corresponding scheduler class for the scheduler you want. For example, to use Euler, first import [EulerDiscreteScheduler](https://huggingface.co/docs/diffusers/v0.29.2/en/api/schedulers/euler#diffusers.EulerDiscreteScheduler) from Diffusers by adding this line of code.
```py
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
```

The next step is to load the scheduler.
```py
model = "IDK-ab0ut/Yiffymix_v51-XL"
euler = EulerDiscreteScheduler.from_pretrained(
        model, subfolder="scheduler")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=euler, torch_dtype=torch.float16
           ).to("cuda")
```
Now you can generate images using the scheduler you want.

Another example uses DPM++ 2M SDE Karras. First, import [DPMSolverMultistepScheduler](https://huggingface.co/docs/diffusers/v0.29.2/api/schedulers/multistep_dpm_solver) from Diffusers.
```py
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler
```
Next, load the scheduler into the model.
```py
model = "IDK-ab0ut/Yiffymix_v51-XL"
dpmsolver = DPMSolverMultistepScheduler.from_pretrained(
            model, subfolder="scheduler", use_karras_sigmas=True,
            algorithm_type="sde-dpmsolver++")
# 'use_karras_sigmas=True' makes the scheduler
# use Karras sigmas during sampling.
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=dpmsolver, torch_dtype=torch.float16
           ).to("cuda")
```
## Variational Autoencoder (VAE) Installation🖼
There are two ways to load a [Variational Autoencoder (VAE)](https://huggingface.co/learn/computer-vision-course/en/unit5/generative-models/variational_autoencoders) into the model: download the file manually, or fetch it remotely from code. This section covers the remote method, as it's the more convenient one. VAE files are usually in .safetensors format, and there are two main places to download them from: HuggingFace and [CivitAI](https://civitai.com).
### From HuggingFace😊
This method is pretty straightforward. Pick any VAE repository you like, navigate to "Files and versions", and open the VAE's file.

Click "Copy download link" for the file; you'll need this link later.

The next step is to import the [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl) class alongside the pipeline.
```py
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
```
Finally, load the VAE file into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
```py
link = "your VAE's download link"
model = "IDK-ab0ut/Yiffymix_v51-XL"
vae = AutoencoderKL.from_single_file(link).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")
```

If you're using FP16 for the model, it's essential to also use FP16 for the VAE.
```py
link = "your VAE's download link"
model = "IDK-ab0ut/Yiffymix_v51-XL"
vae = AutoencoderKL.from_single_file(
      link, torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16,
           vae=vae).to("cuda")
```
For a manually downloaded file, simply set the `link` variable (or whichever variable holds the link) to the local path of the .safetensors file instead.

##### <b>Troubleshooting</b>🔧

If you're getting an `HTTP 404` error because the program can't resolve your link, here's a simple fix.

First, install [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) using `pip`.
```py
!pip install --upgrade huggingface_hub
```
Import [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index).
```py
from huggingface_hub import hf_hub_download
```

Next, instead of a direct link to the file, use the repository ID and filename.
```py
repo = "username/model"
file = "the VAE's filename.safetensors"
model = "IDK-ab0ut/Yiffymix_v51-XL"
vae = AutoencoderKL.from_single_file(
      hf_hub_download(repo_id=repo,
      filename=file)).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")
# use 'torch_dtype=torch.float16' for FP16.
# add 'subfolder="folder_name"' argument if the VAE is in specific folder.
```
You can also use [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) from the start, without waiting for the direct-link method to fail with `HTTP 404`.
### From CivitAI🇨
It's trickier if the VAE is on [CivitAI](https://civitai.com), because `from_single_file()` only works with files hosted on HuggingFace or stored locally. You could re-upload the VAE to HuggingFace, but you must comply with the model's license before doing so. Alternatively, use the `wget` or `curl` command to download the file from outside HuggingFace.

Before downloading, change to the directory where you want to save the file with `cd`.
Then specify the output filename before the link: use the `-O` option for `wget`, or the lowercase `-o` option for `curl`.
```py
# For 'wget'
!cd <path>; wget -O [filename.safetensors] <link>

# For 'curl' (note the lowercase '-o')
!cd <path>; curl -L -o [filename.safetensors] <link>

# Use only one of them. Replace "filename" with any
# name you want. If you run the code in Command Prompt or
# a terminal, you don't need the exclamation mark (!).
```

Since the file is now in your local directory, you can finally use the `from_single_file()` method normally. Make sure to input the correct path to your VAE file, then load it into [AutoencoderKL](https://huggingface.co/docs/diffusers/en/api/models/autoencoderkl).
```py
path = "path/to/vae.safetensors" # Local path to the downloaded file.
model = "IDK-ab0ut/Yiffymix_v51-XL"
vae = AutoencoderKL.from_single_file(path).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")

# Use 'torch_dtype=torch.float16' for both
# AutoencoderKL and SDXL pipeline for FP16.
```
<small> **Note**: You can also use the `wget` and `curl` methods to download files from HuggingFace. </small>

And that's it: the VAE from [CivitAI](https://civitai.com) is now loaded.

# Usage Restrictions📝
By using this repository, you agree to not use the model: 
1. In any way that violates any applicable national, federal, state, local or international law or regulation.
2. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way.
3. To generate or disseminate verifiably false information and/or content with the purpose of harming others.
4. To generate or disseminate personal identifiable information that can be used to harm an individual.
5. To defame, disparage or otherwise harass others.
6. For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation.
7. For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics.
8. To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
9. For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
10. To provide medical advice and medical results interpretation.
11. To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
You shall use this model only for creative and artistic purposes, without any intention of causing harm to others.
# That's all for this repository. Thank you for reading my silly note. Have a nice day!
#### Any help or suggestions will be appreciated. Thank you!