MOFA-demo

Environment Setup

pip install -r requirements.txt
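After installing, it can be useful to confirm that the core dependencies imported correctly before launching the demo. A minimal sketch; the package names below are assumptions for illustration, and requirements.txt remains the authoritative list:

```python
import importlib.util

# Hypothetical core dependencies of the demo; the real list lives in
# requirements.txt. These names are assumptions for illustration.
CORE_DEPS = ["torch", "diffusers", "gradio"]

def missing_packages(names):
    """Return the subset of `names` that cannot be found by the importer."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(CORE_DEPS)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All core dependencies found.")
```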

Download Checkpoints

  1. Download the pretrained SVD_xt checkpoints from Hugging Face into ./ckpts.

  2. Download the MOFA-Adapter checkpoint from Hugging Face into ./ckpts.

  3. Download the CMP checkpoint from here and put it into ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints.
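The download steps above can also be scripted. A minimal sketch using huggingface_hub's snapshot_download; the repo ids below are placeholders, not the actual sources, so substitute the Hugging Face links from the steps above:

```python
from pathlib import Path

# Source repo -> local target directory for steps 1 and 2.
# NOTE: the repo ids here are placeholder assumptions for illustration;
# use the actual Hugging Face links referenced in the steps above.
# Step 3 (CMP) is a separate single-file download and is not listed.
DOWNLOADS = {
    "<svd-xt-repo-id>": Path("ckpts/stable-video-diffusion-img2vid-xt-1-1"),
    "<mofa-adapter-repo-id>": Path("ckpts/controlnet"),
}

def plan(root=Path(".")):
    """Resolve each download target directory against the repo root."""
    return {repo: root / dest for repo, dest in DOWNLOADS.items()}

# To actually fetch the repos, huggingface_hub could be used like so:
#   from huggingface_hub import snapshot_download
#   for repo_id, dest in plan().items():
#       snapshot_download(repo_id=repo_id, local_dir=dest)
```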

The final checkpoint directory structure should be:

./ckpts/
|-- controlnet
|   |-- config.json
|   `-- diffusion_pytorch_model.safetensors
`-- stable-video-diffusion-img2vid-xt-1-1
    |-- feature_extractor
    |   `-- ...
    |-- image_encoder
    |   `-- ...
    |-- scheduler
    |   `-- ...
    |-- unet
    |   `-- ...
    |-- unet_ch9
    |   `-- ...
    |-- vae
    |   `-- ...
    |-- svd_xt_1_1.safetensors
    `-- model_index.json
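The layout above can be sanity-checked with a short script. A minimal sketch: it only verifies the entries listed explicitly in the tree, since the per-subfolder contents ("...") are elided:

```python
from pathlib import Path

# Files that must exist after the downloads, per the tree above.
REQUIRED = [
    "ckpts/controlnet/config.json",
    "ckpts/controlnet/diffusion_pytorch_model.safetensors",
    "ckpts/stable-video-diffusion-img2vid-xt-1-1/svd_xt_1_1.safetensors",
    "ckpts/stable-video-diffusion-img2vid-xt-1-1/model_index.json",
]

# Subdirectories whose exact contents are elided ("...") in the tree,
# so only their presence is checked here.
REQUIRED_DIRS = [
    "ckpts/stable-video-diffusion-img2vid-xt-1-1/" + d
    for d in ("feature_extractor", "image_encoder", "scheduler",
              "unet", "unet_ch9", "vae")
]

def missing_files(root=Path(".")):
    """Return required files/directories that are absent under `root`."""
    missing = [p for p in REQUIRED if not (root / p).is_file()]
    missing += [d for d in REQUIRED_DIRS if not (root / d).is_dir()]
    return missing

if __name__ == "__main__":
    gaps = missing_files()
    print("Checkpoint layout OK" if not gaps else "Missing: " + ", ".join(gaps))
```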

Run Gradio Demo

python run_gradio.py

Please follow the instructions on the Gradio interface during inference.