Update README.md
```py
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, torch_dtype=torch.float16,
    vae=vae).to("cuda")
```

For manual download, just fill the `link` variable (or any of the string variables containing a file link) with the direct URL of the `.safetensors` file.
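If it helps, here is a hypothetical helper (the function name and URL below are made up for illustration, not part of this repository) that splits a direct Hugging Face file link into the `repo_id` and `filename` pair that `hf_hub_download()` expects. It is pure string handling, no network access:

```python
from urllib.parse import urlparse

def split_hf_url(link: str):
    # A direct HF file link looks like:
    #   https://huggingface.co/<user>/<model>/resolve/<revision>/<path/to/file>
    parts = urlparse(link).path.strip("/").split("/")
    repo_id = "/".join(parts[:2])   # "<user>/<model>"
    filename = "/".join(parts[4:])  # everything after "resolve/<revision>/"
    return repo_id, filename

link = "https://huggingface.co/username/model/resolve/main/vae/model.safetensors"
repo, file = split_hf_url(link)
print(repo, file)  # username/model vae/model.safetensors
```

The returned pair can then be passed straight to `hf_hub_download(repo_id=repo, filename=file)`.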

##### <b><small>Troubleshooting</small></b> 🔧
Next, instead of a direct link to the file, you want to use the repository ID.

```py
repo = "username/model"
file = "the vae's file.safetensors"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(
    hf_hub_download(repo_id=repo,
                    filename=file)).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, vae=vae).to("cuda")
# Use 'torch_dtype=torch.float16' for FP16.
# Add a 'subfolder="folder_name"' argument if the VAE is in a specific folder.
```
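On the `subfolder` comment: as far as I understand (this is an assumption about the `huggingface_hub` API, not something stated in this README), `hf_hub_download()` simply joins `subfolder` and `filename` into one repo-relative path, so `subfolder="vae", filename="model.safetensors"` targets the same file as `filename="vae/model.safetensors"`. A pure-string sketch of that equivalence:

```python
import posixpath

# Hypothetical layout: the VAE sits at "vae/model.safetensors" inside the repo.
subfolder, filename = "vae", "model.safetensors"

# subfolder + filename resolve to one repo-relative path (assumed API behavior).
joined = posixpath.join(subfolder, filename)
print(joined)  # vae/model.safetensors
```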
```py
pipeline = StableDiffusionXLPipeline.from_pretrained(
    model, vae=vae).to("cuda")
# Use 'torch_dtype=torch.float16' for both
# AutoencoderKL and SDXL pipeline for FP16.
```

<small>**Note**: You can also use `wget` or `curl` to download files from Hugging Face.</small>
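A hedged sketch of the `wget`/`curl` route (the repository and filename here are placeholders, not real): Hugging Face serves raw repo files under `/resolve/<revision>/<path>`, so the commands reduce to:

```shell
# Hypothetical repository and filename; replace with your own.
REPO="username/model"
FILE="vae.safetensors"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"

# Either tool works; -L follows redirects, -O/-o name the output file.
# Uncomment one of these to actually download:
#   wget -O "${FILE}" "${URL}"
#   curl -L -o "${FILE}" "${URL}"
echo "${URL}"
```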
Now you have it, a VAE loaded from [CivitAI](https://civitai.com).
# That's all for this repository. Thank you for reading my silly note. Have a nice day!
#### Any help or suggestions will be appreciated. Thank you!