upload README
README.md
CHANGED
@@ -1,65 +1,9 @@
- 
- 
- 
- 
- 
- 
- 
- 
- 
- - Gradio demo is available.
- - [Hugging Face demo will be available]().
- 
- ## Quick Start
- ### Step 1
- ```
- # clone the repo
- git clone https://github.com/Sierkinhane/VisorGPT.git
- 
- # go to the directory
- cd VisorGPT
- 
- # create a new environment
- conda create -n visorgpt python=3.8
- 
- # activate the new environment
- conda activate visorgpt
- 
- # install the basic requirements
- pip3 install -r requirements.txt
- 
- # install ControlNet and GLIGEN
- cd demo/ControlNet
- pip3 install -v -e .
- cd ../GLIGEN
- pip3 install -v -e .
- ```
- ### Step 2 - Download pre-trained weights
- Download [visorgpt](https://drive.google.com/file/d/1Pk4UPNKBMH-0uRLmK5COYTca7FUrN8XY/view?usp=share_link), [controlnet-pose2img](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_openpose.pth), [controlnet-sd](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors), and [gligen-bbox2img](https://huggingface.co/gligen/gligen-generation-text-box/blob/main/diffusion_pytorch_model.bin), and put them as follows:
- ```
- ├── demo/
- |   ├── ckpts
- |   |   ├── controlnet
- |   |   |   ├── control_v11p_sd15_openpose.pth
- |   |   |   ├── v1-5-pruned-emaonly.safetensors
- |   |   ├── gligen
- |   |   |   ├── diffusion_pytorch_model_box.bin
- |   |   ├── visorgpt
- |   |   |   ├── visorgpt_dagger_ta_tb.pt
- ```
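One way to fetch these checkpoints and arrange them in the layout above is a short script along the following lines. This is only a sketch: it assumes a recent `huggingface_hub` and `gdown` are installed, and the rename to `diffusion_pytorch_model_box.bin` simply mirrors the tree shown in Step 2.

```python
# Sketch: download the pre-trained weights into demo/ckpts/ as laid out above.
# Assumes `pip install huggingface_hub gdown`; paths and the *_box.bin rename
# follow the directory tree shown in Step 2.
import os
import shutil

import gdown
from huggingface_hub import hf_hub_download

for sub in ("controlnet", "gligen", "visorgpt"):
    os.makedirs(os.path.join("demo", "ckpts", sub), exist_ok=True)

# VisorGPT weights from the Google Drive link above.
gdown.download(
    "https://drive.google.com/uc?id=1Pk4UPNKBMH-0uRLmK5COYTca7FUrN8XY",
    "demo/ckpts/visorgpt/visorgpt_dagger_ta_tb.pt",
    quiet=False,
)

# ControlNet openpose model and Stable Diffusion v1.5 base weights.
hf_hub_download("lllyasviel/ControlNet-v1-1", "control_v11p_sd15_openpose.pth",
                local_dir="demo/ckpts/controlnet")
hf_hub_download("runwayml/stable-diffusion-v1-5", "v1-5-pruned-emaonly.safetensors",
                local_dir="demo/ckpts/controlnet")

# GLIGEN text-box model, renamed to the *_box.bin file name the tree expects.
gligen_path = hf_hub_download("gligen/gligen-generation-text-box",
                              "diffusion_pytorch_model.bin",
                              local_dir="demo/ckpts/gligen")
shutil.move(gligen_path, "demo/ckpts/gligen/diffusion_pytorch_model_box.bin")
```

Run from the repository root, this should populate `demo/ckpts/` so the demo in Step 3 can find the weights.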
- 
- ### Step 3 - Run demo
- ```
- CUDA_VISIBLE_DEVICES=0 python3 gradio_demo.py
- ```
- 
- If you use our code, please consider citing our paper.
- 
- ```
- @article{xie2023visorgpt,
-   title={VisorGPT: Learning Visual Prior via Generative Pre-Training},
-   author={Xie, Jinheng and Ye, Kai and Li, Yudong and Li, Yuexiang and Lin, Kevin Qinghong and Zheng, Yefeng and Shen, Linlin and Shou, Mike Zheng},
-   journal={arXiv preprint arXiv:2305.13777},
-   year={2023}
- }
- ```
+ title: VisorGPT
+ emoji: π
+ colorFrom: blue
+ colorTo: red
+ sdk: gradio
+ sdk_version: 3.25.0
+ app_file: app.py
+ pinned: false
+ license: gpl-3.0
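The metadata added above tells Spaces to serve the repository with the Gradio SDK (pinned to 3.25.0) and to use `app.py` as the entry point. As a rough illustration only, a minimal Gradio entry point has this shape; the `generate` function and its text-in/text-out interface below are placeholders, not the Space's actual VisorGPT wiring.

```python
# Minimal sketch of a Gradio Space entry point (app.py), as implied by
# `sdk: gradio` and `app_file: app.py` above. The generate() body is a
# placeholder; the real Space would load VisorGPT and sample from it.
import gradio as gr


def generate(prompt: str) -> str:
    # Placeholder for sampling a sequence / visual prior from the model.
    return f"VisorGPT would generate output for: {prompt}"


demo = gr.Interface(fn=generate, inputs="text", outputs="text", title="VisorGPT")

if __name__ == "__main__":
    demo.launch()
```

Since the `app_file` is typically executed directly on Spaces, calling `demo.launch()` under the `__main__` guard behaves the same there as when the file is run locally.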