## Environment Setup
`pip install -r requirements.txt`
## Download Checkpoints
1. Download the pretrained checkpoints of [SVD_xt](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1) from Hugging Face to `./ckpts`.
2. Download the checkpoint of [MOFA-Adapter](https://huggingface.co/MyNiuuu/MOFA-Video-Traj) from Hugging Face to `./ckpts`.
3. Download the CMP checkpoint from [here](https://huggingface.co/MyNiuuu/MOFA-Video-Traj/blob/main/models/cmp/experiments/semiauto_annot/resnet50_vip%2Bmpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar) and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
The final structure of `./ckpts` should be:
```text
./ckpts/
|-- controlnet
|   |-- config.json
|   `-- diffusion_pytorch_model.safetensors
|-- stable-video-diffusion-img2vid-xt-1-1
|   |-- feature_extractor
|   |   `-- ...
|   |-- image_encoder
|   |   `-- ...
|   |-- scheduler
|   |   `-- ...
|   |-- unet
|   |   `-- ...
|   |-- unet_ch9
|   |   `-- ...
|   |-- vae
|   |   `-- ...
|   |-- svd_xt_1_1.safetensors
|   `-- model_index.json
```
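The three downloads above can also be scripted. Below is a minimal sketch using `huggingface_hub`; the target paths mirror the tree above, the `allow_patterns` filter is an assumption about where the adapter weights live in the MOFA-Video-Traj repo, and the SVD repo is gated, so an authenticated Hugging Face token may be required.

```python
from huggingface_hub import snapshot_download, hf_hub_download

# 1. SVD_xt 1.1 base model (gated repo: you may need to log in / pass a token).
snapshot_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    local_dir="./ckpts/stable-video-diffusion-img2vid-xt-1-1",
)

# 2. MOFA-Adapter weights -- the allow_patterns filter assumes the adapter
#    sits under controlnet/ in the repo, matching the tree above.
snapshot_download(
    repo_id="MyNiuuu/MOFA-Video-Traj",
    local_dir="./ckpts",
    allow_patterns=["controlnet/*"],
)

# 3. CMP checkpoint, written to the path the code expects under ./models/cmp/...
hf_hub_download(
    repo_id="MyNiuuu/MOFA-Video-Traj",
    filename="models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar",
    local_dir="./",
)
```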
## Run Gradio Demo
`python run_gradio.py`
Please follow the instructions on the Gradio interface during inference.