tags:
- art
library_name: diffusers
---
# Overview
This is a Diffusers-compatible version of [Yiffymix v51 by chilon249](https://civitai.com/models/3671?modelVersionId=658237).
See the original page for more information.

This repository uses DPM++ 2M Karras as its sampler (Diffusers only).

# Diffusers Installation
### Dependencies Installation
First, you'll need to install a few dependencies. This is a one-time operation; you only need to run the code once.
```py
!pip install -q diffusers transformers accelerate
```
### Model Installation
After the installation, you can run SDXL with this repository using the code below:
```py
from diffusers import StableDiffusionXLPipeline
# ...
```

Feel free to adjust the image's configuration to your liking.
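If you want a complete, end-to-end snippet to adapt, here is a minimal sketch; the repository id, prompt, and generation settings below are placeholders rather than the exact values from the snippet above:
```py
import torch
from diffusers import StableDiffusionXLPipeline

# "username/repo-name" is a placeholder; use this repository's id instead.
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "username/repo-name", torch_dtype=torch.float16
).to("cuda")

prompt = "a scenic landscape, highly detailed"  # placeholder prompt
image = pipeline(prompt=prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image
```
Running the pipeline in float16 on a CUDA device keeps memory usage manageable for SDXL-sized models.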
# Scheduler's Customization
ㅤㅤㅤㅤ<small>🧨</small><b>For Diffusers</b><small>🧨</small>

You can see all available schedulers [here](https://huggingface.co/docs/diffusers/v0.11.0/en/api/schedulers/overview).
```py
# ...
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, scheduler=dpmsolver, torch_dtype=torch.float16,
).to("cuda")
```
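The `dpmsolver` object above is built earlier in the snippet. As a hedged sketch of the usual Diffusers pattern for DPM++ 2M Karras (the repository id is a placeholder, and the original code may differ slightly):
```py
from diffusers import DPMSolverMultistepScheduler

model = "username/repo-name"  # placeholder for this repository's id

# DPM++ 2M Karras corresponds to the multistep DPM-Solver with Karras sigmas enabled.
dpmsolver = DPMSolverMultistepScheduler.from_pretrained(
    model, subfolder="scheduler", use_karras_sigmas=True
)
```
Any other scheduler from the list linked above can be swapped in the same way; `use_karras_sigmas=True` is what distinguishes the Karras variant from plain DPM++ 2M.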
## Variational Autoencoder (VAE) Installation
There are two ways to get a [Variational Autoencoder (VAE)](https://huggingface.co/learn/computer-vision-course/en/unit5/generative-models/variational_autoencoders) file into the model: downloading the file manually, or downloading it remotely with code. In this repository, I'll explain the code-based approach, since it's the more efficient way. The first step is to download the VAE file; you can do this manually or remotely, but I recommend the remote method. VAE files are usually in .safetensors format, and there are two websites you can download them from: HuggingFace and [CivitAI](https://civitai.com).
### From HuggingFace
This method is pretty straightforward. Pick any VAE repository you like, then navigate to "Files" and find the VAE file. Make sure to click the file.

Click "Copy Download Link" for the file; you'll need this link.
For a manual download, just set the `link` variable (or whatever string variable you use to load the VAE file) to the local path of the .safetensors file.
##### Troubleshooting
If you're experiencing an `HTTP404` error because the program can't resolve your link, here's a simple fix.

Add the `subfolder="folder_name"` argument if the VAE is in a specific folder. You can use [hf_hub_download()](https://huggingface.co/docs/huggingface_hub/en/guides/download) from [huggingface_hub](https://huggingface.co/docs/huggingface_hub/en/index) without needing to check whether the previous method returns an `HTTP404` error.
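In case it helps, a minimal sketch of that `hf_hub_download()` route (every value below is a placeholder):
```py
import torch
from huggingface_hub import hf_hub_download
from diffusers import AutoencoderKL

# Placeholders: point these at the VAE repository and file you actually want.
vae_path = hf_hub_download(
    repo_id="username/vae-repo",
    filename="vae.safetensors",
    subfolder="folder_name",  # drop this argument if the file sits in the repository root
)

vae = AutoencoderKL.from_single_file(vae_path, torch_dtype=torch.float16)
```
Because `hf_hub_download()` resolves the repository id and filename for you, there's no hand-built URL that could return a 404.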
### From CivitAI
It's trickier if the VAE is hosted on [CivitAI](https://civitai.com), because the `from_single_file()` method only works with files hosted on HuggingFace or stored locally. You could upload the VAE from there to HuggingFace, but you must comply with the model's license before doing so. To work around this, you can use the `wget` or `curl` command to fetch the file from outside HuggingFace.

Before downloading, change into the directory where you want to save the model with `cd`, so the downloaded VAE file stays organized.
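As a rough sketch of that step in a notebook (the folder name and download URL are placeholders; a direct link to the .safetensors file is assumed):
```py
# Placeholders: pick your own folder and paste the VAE's direct download link.
!mkdir -p vae-files
%cd vae-files
!wget -O vae.safetensors "<VAE_DOWNLOAD_URL>"
```
Once downloaded, the file can be loaded like any local .safetensors VAE with `AutoencoderKL.from_single_file()`.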