<p align="center">
<img src="assets/logo2.png" height=65>
</p>
<div align="center">
⬇️[**Download Models**](#-download-models) **|** 💻[**How to Test**](#-how-to-test)
</div>
Official implementation of T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models.
#### [Paper](https://arxiv.org/abs/2302.08453)
<p align="center">
<img src="assets/overview1.png" height=250>
</p>
We propose T2I-Adapter, a **simple and small (~70M parameters, ~300M storage space)** network that can provide extra guidance to pre-trained text-to-image models while **freezing** the original large text-to-image models.
T2I-Adapter aligns internal knowledge in T2I models with external control signals.
We can train various adapters according to different conditions, and achieve rich control and editing effects.
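Conceptually, the adapter injects guidance by adding its condition features to the frozen model's intermediate features at matching scales, so only the small adapter needs training. A minimal sketch of that idea (illustrative only, not the repository's actual code):

```python
# Conceptual sketch (not the repository's actual implementation):
# the adapter emits multi-scale condition features that are added to
# the frozen U-Net's intermediate features; `weight` scales the guidance.
def apply_adapter(unet_features, adapter_features, weight=1.0):
    """Add adapter guidance to each matching feature scale."""
    return [f + weight * a for f, a in zip(unet_features, adapter_features)]
```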
<p align="center">
<img src="assets/teaser.png" height=500>
</p>
### ⬇️ Download Models
Put the downloaded models in the `T2I-Adapter/models` folder.
1. The **T2I-Adapters** can be downloaded from <https://huggingface.co/TencentARC/T2I-Adapter>.
2. The pretrained **Stable Diffusion v1.4** model can be downloaded from <https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/tree/main>. You need to download the `sd-v1-4.ckpt` file.
3. [Optional] If you want to use the **Anything v4.0** model, you can download the pretrained model from <https://huggingface.co/andite/anything-v4.0/tree/main>. You need to download the `anything-v4.0-pruned.ckpt` file.
4. The pretrained **clip-vit-large-patch14** folder can be downloaded from <https://huggingface.co/openai/clip-vit-large-patch14/tree/main>. Remember to download the whole folder!
5. The pretrained keypose detection models include FasterRCNN (human detection) from <https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth> and HRNet (pose detection) from <https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth>.
After downloading, the folder structure should look like this:
<p align="center">
<img src="assets/downloaded_models.png" height=100>
</p>
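A quick sanity check like the following can confirm the files landed in `models/`. It is purely illustrative; adjust `EXPECTED` to the checkpoints you actually downloaded:

```python
from pathlib import Path

# Illustrative sanity check; EXPECTED lists example filenames from the
# steps above -- adjust it to the checkpoints you actually downloaded.
EXPECTED = ["sd-v1-4.ckpt", "anything-v4.0-pruned.ckpt"]

def missing_models(models_dir="models"):
    """Return the expected checkpoint files not yet present."""
    root = Path(models_dir)
    return [name for name in EXPECTED if not (root / name).exists()]
```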
### 🔧 Dependencies and Installation
- Python >= 3.6 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.4](https://pytorch.org/)
```bash
pip install -r requirements.txt
```
- If you want to use the full function of keypose-guided generation, you need to install MMPose. For details please refer to <https://github.com/open-mmlab/mmpose>.
### 💻 How to Test
- The results are in the `experiments` folder.
- If you want to use `Anything v4.0`, add `--ckpt models/anything-v4.0-pruned.ckpt` to the following commands.
#### **For Simple Experience**
> python app.py
#### **Sketch Adapter**
- Sketch to Image Generation
> python test_sketch.py --plms --auto_resume --prompt "A car with flying wings" --path_cond examples/sketch/car.png --ckpt models/sd-v1-4.ckpt --type_in sketch
- Image to Image Generation
> python test_sketch.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/anything_sketch/human.png --ckpt models/sd-v1-4.ckpt --type_in image
- Generation with **Anything** setting
> python test_sketch.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/anything_sketch/human.png --ckpt models/anything-v4.0-pruned.ckpt --type_in image
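The three invocations above differ only in prompt, condition image, checkpoint, and `--type_in`. A small helper (purely illustrative, not part of this repository) can assemble them:

```python
import subprocess

# Illustrative helper (not part of this repository): build the
# test_sketch.py command line from the arguments that vary above.
def sketch_command(prompt, path_cond,
                   ckpt="models/sd-v1-4.ckpt", type_in="sketch"):
    return ["python", "test_sketch.py", "--plms", "--auto_resume",
            "--prompt", prompt, "--path_cond", path_cond,
            "--ckpt", ckpt, "--type_in", type_in]

# e.g. subprocess.run(sketch_command("A car with flying wings",
#                                    "examples/sketch/car.png"))
```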
##### Gradio Demo
<p align="center">
<img src="assets/gradio_sketch.png">
</p>
You can use Gradio to experience all three functions at once. CPU is also supported; set the device to `'cpu'`.
```bash
python gradio_sketch.py
```
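The device choice mentioned above boils down to something like the following (a sketch of the idea, not the script's exact code):

```python
# Sketch of the CPU fallback mentioned above (not the script's exact
# code): use 'cpu' when forced, or when CUDA / torch is unavailable.
def pick_device(force_cpu=False):
    if force_cpu:
        return "cpu"
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"
```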
#### **Keypose Adapter**
- Keypose to Image Generation
> python test_keypose.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/keypose/iron.png --type_in pose
- Image to Image Generation
> python test_keypose.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/sketch/human.png --type_in image
- Generation with **Anything** setting
> python test_keypose.py --plms --auto_resume --prompt "A beautiful girl" --path_cond examples/sketch/human.png --ckpt models/anything-v4.0-pruned.ckpt --type_in image
##### Gradio Demo
<p align="center">
<img src="assets/gradio_keypose.png">
</p>
You can use Gradio to experience all three functions at once. CPU is also supported; set the device to `'cpu'`.
```bash
python gradio_keypose.py
```
#### **Segmentation Adapter**
> python test_seg.py --plms --auto_resume --prompt "A black Honda motorcycle parked in front of a garage" --path_cond examples/seg/motor.png
#### **Two adapters: Segmentation and Sketch Adapters**
> python test_seg_sketch.py --plms --auto_resume --prompt "An all white kitchen with an electric stovetop" --path_cond examples/seg_sketch/mask.png --path_cond2 examples/seg_sketch/edge.png
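Conceptually, multiple adapters compose by summing their guidance features at each scale, optionally with per-adapter weights (the weights here are an illustrative assumption, not the repository's exact scheme):

```python
# Conceptual sketch: combine guidance from several adapters at each
# feature scale by a weighted sum (per-adapter weights are a tunable
# assumption for illustration).
def compose_adapters(per_adapter_features, weights=None):
    weights = weights or [1.0] * len(per_adapter_features)
    combined = []
    for scale_feats in zip(*per_adapter_features):
        combined.append(sum(w * f for w, f in zip(weights, scale_feats)))
    return combined
```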
#### **Local editing with adapters**
> python test_sketch_edit.py --plms --auto_resume --prompt "A white cat" --path_cond examples/edit_cat/edge_2.png --path_x0 examples/edit_cat/im.png --path_mask examples/edit_cat/mask.png
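The local-editing command regenerates only inside the mask while keeping the original image (`--path_x0`) elsewhere, which amounts to a masked blend. A per-element sketch of that blend (illustrative, not the script's code):

```python
# Illustrative per-pixel blend for local editing: keep the original x0
# where the mask is 0, take the newly generated content where it is 1.
def blend(x0, generated, mask):
    return [m * g + (1.0 - m) * x for x, g, m in zip(x0, generated, mask)]
```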
## Stable Diffusion + T2I-Adapters (only ~70M parameters, ~300M storage space)
The following is the detailed structure of a **Stable Diffusion** model with the **T2I-Adapter**.
<p align="center">
<img src="assets/overview2.png" height=300>
</p>
<!-- ## Web Demo
* All the usage of three T2I-Adapters (i.e, sketch, keypose and segmentation) are integrated into [Huggingface Spaces]() π€ using [Gradio](). Have fun with the Web Demo. -->
## 🚀 Interesting Applications
### Stable Diffusion results guided with the sketch T2I-Adapter
The corresponding edge maps are predicted by PiDiNet. The sketch T2I-Adapter generalizes well to other similar sketch types, for example, sketches from the Internet and user scribbles.
<p align="center">
<img src="assets/sketch_base.png" height=800>
</p>
### Stable Diffusion results guided with the keypose T2I-Adapter
The keypose results are predicted by [MMPose](https://github.com/open-mmlab/mmpose).
With keypose guidance, the keypose T2I-Adapter can also help to generate animals with the same keypose, for example, pandas and tigers.
<p align="center">
<img src="assets/keypose_base.png" height=600>
</p>
### T2I-Adapter with Anything-v4.0
Once the T2I-Adapter is trained, it can act as a **plug-and-play module** and can be seamlessly integrated into fine-tuned diffusion models **without re-training**, for example, Anything v4.0.
#### ✨ Anything results with the plug-and-play sketch T2I-Adapter (no extra training)
<p align="center">
<img src="assets/sketch_anything.png" height=600>
</p>
#### Anything results with the plug-and-play keypose T2I-Adapter (no extra training)
<p align="center">
<img src="assets/keypose_anything.png" height=600>
</p>
### Local editing with the sketch adapter
When combined with the inpainting mode of Stable Diffusion, we can realize local editing with user-specified guidance.
#### ✨ Change the head direction of the cat
<p align="center">
<img src="assets/local_editing_cat.png" height=300>
</p>
#### ✨ Add rabbit ears on Iron Man's head
<p align="center">
<img src="assets/local_editing_ironman.png" height=400>
</p>
### Combine different concepts with the adapter
The adapter can be used to enhance Stable Diffusion's ability to combine different concepts.
#### ✨ A car with flying wings. / A doll in the shape of letter 'A'.
<p align="center">
<img src="assets/enhance_SD2.png" height=600>
</p>
### Sequential editing with the sketch adapter
We can realize sequential editing with the adapter guidance.
<p align="center">
<img src="assets/sequential_edit.png">
</p>
### Composable Guidance with multiple adapters
Stable Diffusion results guided with the segmentation and sketch adapters together.
<p align="center">
<img src="assets/multiple_adapters.png">
</p>
![visitors](https://visitor-badge.glitch.me/badge?page_id=TencentARC/T2I-Adapter)
Logo materials: [adapter](https://www.flaticon.com/free-icon/adapter_4777242), [lightbulb](https://www.flaticon.com/free-icon/lightbulb_3176369)