---
license: openrail
---

🔬 Original paper and models by https://github.com/vislearn/ControlNet-XS

👷🏽‍♀️ Translated into diffusers architecture by https://twitter.com/UmerHAdil

This model is trained for use with [StableDiffusionXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).

---

ControlNet-XS was introduced in [ControlNet-XS](https://vislearn.github.io/ControlNet-XS/) by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the [original ControlNet](https://huggingface.co/papers/2302.05543) can be made much smaller and still produce good results.

As with the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.
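
The snippet below is a minimal sketch of that workflow, assuming a recent `diffusers` release that ships the ControlNet-XS classes (`ControlNetXSAdapter`, `StableDiffusionXLControlNetXSPipeline`) and assuming this card's repo id is `UmerHA/ConrolNetXS-SDXL-canny`; substitute the actual model id if it differs.

```python
# Minimal sketch, assuming diffusers >= 0.27 (which ships the ControlNet-XS
# classes) and that this card's repo id is "UmerHA/ConrolNetXS-SDXL-canny".
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline
from diffusers.utils import load_image

# Load the small ControlNet-XS control model and attach it to the SDXL base.
controlnet = ControlNetXSAdapter.from_pretrained(
    "UmerHA/ConrolNetXS-SDXL-canny",  # assumed repo id for this card
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Turn any photo into a canny edge map to use as the control image.
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png"
)
edges = cv2.Canny(np.array(image), 100, 200)  # low/high hysteresis thresholds
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # grayscale -> RGB

# The edge map constrains the layout; the prompt controls the content.
result = pipe(
    "aerial view, a futuristic research complex, bright modern lighting",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
result.save("controlnet-xs-canny.png")
```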

Using ControlNet-XS instead of regular ControlNet will produce images of roughly the same quality, but 20-25% faster ([see benchmark](https://github.com/UmerHA/controlnet-xs-benchmark/blob/main/Speed%20Benchmark.ipynb)) and with ~45% less memory usage.

---

Other ControlNet-XS models:

- [StableDiffusion-XL and depth input](https://huggingface.co/UmerHA/ConrolNetXS-SDXL-depth)
- [StableDiffusion 2.1 and canny edges input](https://huggingface.co/UmerHA/ConrolNetXS-SD2.1-canny)
- [StableDiffusion 2.1 and depth input](https://huggingface.co/UmerHA/ConrolNetXS-SD2.1-depth)