Spaces: Running
David Krajewski committed • Commit 0528815
1 Parent(s): 41a9b74
Added code to DL models
This view is limited to 50 files because it contains too many changes. See raw diff.
- MOFA-Video-Traj/README.md +0 -42
- README.md +31 -74
- MOFA-Video-Traj/run_gradio.py → app.py +16 -2
- assets/images/README.md +0 -1
- assets/images/project-mofa.png +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate_slurm.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/config.yaml +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume.sh +0 -0
- {MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume_slurm.sh +0 -0
MOFA-Video-Traj/README.md
DELETED
@@ -1,42 +0,0 @@
-## Environment Setup
-
-`pip install -r requirements.txt`
-
-## Download checkpoints
-
-1. Download the pretrained checkpoints of [SVD_xt](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1) from huggingface to `./ckpts`.
-
-2. Download the checkpoint of [MOFA-Adapter](https://huggingface.co/MyNiuuu/MOFA-Video-Traj) from huggingface to `./ckpts`.
-
-3. Download the checkpoint of CMP from [here](https://huggingface.co/MyNiuuu/MOFA-Video-Traj/blob/main/models/cmp/experiments/semiauto_annot/resnet50_vip%2Bmpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar) and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
-
-The final structure of checkpoints should be:
-
-
-```text
-./ckpts/
-|-- controlnet
-|   |-- config.json
-|   `-- diffusion_pytorch_model.safetensors
-|-- stable-video-diffusion-img2vid-xt-1-1
-|   |-- feature_extractor
-|   |   |-- ...
-|   |-- image_encoder
-|   |   |-- ...
-|   |-- scheduler
-|   |   |-- ...
-|   |-- unet
-|   |   |-- ...
-|   |-- unet_ch9
-|   |   |-- ...
-|   |-- vae
-|   |   |-- ...
-|   |-- svd_xt_1_1.safetensors
-|   `-- model_index.json
-```
-
-## Run Gradio Demo
-
-`python run_gradio.py`
-
-Please refer to the instructions on the gradio interface during the inference process.
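The download steps in this deleted README can also be scripted end to end with `huggingface_hub`. The sketch below is illustrative and not part of the commit: the repo ids and in-repo file paths are taken from the README links above, the `allow_patterns` layout assumes the adapter repo stores its weights under `ckpts/controlnet/` (as the commit's own `app.py` call suggests), and the SVD_xt repo is gated, so a prior `huggingface-cli login` may be required.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Steps 1-2: SVD_xt and the MOFA-Adapter (ControlNet) weights into ./ckpts
snapshot_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt-1-1",  # gated repo
    local_dir="./ckpts/stable-video-diffusion-img2vid-xt-1-1",
)
snapshot_download(
    repo_id="MyNiuuu/MOFA-Video-Traj",
    local_dir=".",
    allow_patterns=["ckpts/controlnet/*"],  # only the adapter weights
)

# Step 3: the CMP checkpoint, at the in-repo path from the link above
hf_hub_download(
    repo_id="MyNiuuu/MOFA-Video-Traj",
    filename="models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar",
    local_dir=".",
)
```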
README.md
CHANGED
@@ -1,85 +1,42 @@
-<h1>
-MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
-</h1>
-<a href='https://arxiv.org/abs/2405.20222'><img src='https://img.shields.io/badge/ArXiv-PDF-red'></a> <a href='https://myniuuu.github.io/MOFA_Video'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://huggingface.co/MyNiuuu/MOFA-Video-Traj'><img src='https://img.shields.io/badge/🤗 huggingface-MOFA_Traj-blue'></a>
-<div>
-<a href='https://myniuuu.github.io/' target='_blank'>Muyao Niu</a> <sup>1,2</sup>
-<a href='https://vinthony.github.io/academic/' target='_blank'>Xiaodong Cun</a><sup>2,*</sup>
-<a href='https://xinntao.github.io/' target='_blank'>Xintao Wang</a><sup>2</sup>
-<a href='https://yzhang2016.github.io/' target='_blank'>Yong Zhang</a><sup>2</sup> <br>
-<a href='https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en' target='_blank'>Ying Shan</a><sup>2</sup>
-<a href='https://scholar.google.com/citations?user=JD-5DKcAAAAJ&hl=en' target='_blank'>Yinqiang Zheng</a><sup>1,*</sup>
-</div>
-<div>
-<sup>1</sup> The University of Tokyo <sup>2</sup> Tencent AI Lab <sup>*</sup> Corresponding Author
-</div>
-</div>
-
-Check the gallery of our <a href='https://myniuuu.github.io/MOFA_Video' target='_blank'>project page</a> for many visual results!
-</div>
-
-</h3>
-</div>
-
-<div align="center">
-<img src="assets/images/project-mofa.png">
-</div>
-
-We introduce MOFA-Video, a method designed to adapt motions from different domains to the frozen Video Diffusion Model. By employing <u>sparse-to-dense (S2D) motion generation</u> and <u>flow-based motion adaptation</u>, MOFA-Video can effectively animate a single image using various types of control signals, including trajectories, keypoint sequences, AND their combinations.
-
-<p align="center">
-<img src="assets/images/pipeline.png">
-</p>
-
-During the training stage, we generate sparse control signals through sparse motion sampling and then train different MOFA-Adapters to generate video via pre-trained SVD. During the inference stage, different MOFA-Adapters can be combined to jointly control the frozen SVD.
-
-## 💫 Trajectory-based Image Animation
-
-### Inference
-
-Our inference demo is based on Gradio. Please refer to `./MOFA-Video-Traj/README.md` for instructions.
-
-## Citation
-```
-@article{niu2024mofa,
-  title={MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model},
-  author={Niu, Muyao and Cun, Xiaodong and Wang, Xintao and Zhang, Yong and Shan, Ying and Zheng, Yinqiang},
-  journal={arXiv preprint arXiv:2405.20222},
-  year={2024}
-}
-```
-##
+## Environment Setup
+
+`pip install -r requirements.txt`
+
+## Download checkpoints
+
+1. Download the pretrained checkpoints of [SVD_xt](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1) from huggingface to `./ckpts`.
+
+2. Download the checkpoint of [MOFA-Adapter](https://huggingface.co/MyNiuuu/MOFA-Video-Traj) from huggingface to `./ckpts`.
+
+3. Download the checkpoint of CMP from [here](https://huggingface.co/MyNiuuu/MOFA-Video-Traj/blob/main/models/cmp/experiments/semiauto_annot/resnet50_vip%2Bmpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar) and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
+
+The final structure of checkpoints should be:
+
+```text
+./ckpts/
+|-- controlnet
+|   |-- config.json
+|   `-- diffusion_pytorch_model.safetensors
+|-- stable-video-diffusion-img2vid-xt-1-1
+|   |-- feature_extractor
+|   |   |-- ...
+|   |-- image_encoder
+|   |   |-- ...
+|   |-- scheduler
+|   |   |-- ...
+|   |-- unet
+|   |   |-- ...
+|   |-- unet_ch9
+|   |   |-- ...
+|   |-- vae
+|   |   |-- ...
+|   |-- svd_xt_1_1.safetensors
+|   `-- model_index.json
+```
+
+## Run Gradio Demo
+
+`python run_gradio.py`
+
+Please refer to the instructions on the gradio interface during the inference process.
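Since the new README requires the same checkpoint tree as before, a quick pre-flight check can catch an incomplete download before the demo launches. This is a hypothetical helper, not part of the repo; the file list is an assumption derived from the tree documented above.

```python
import os

# Hypothetical sanity check: a few files from the documented checkpoint tree.
EXPECTED = [
    "ckpts/controlnet/config.json",
    "ckpts/controlnet/diffusion_pytorch_model.safetensors",
    "ckpts/stable-video-diffusion-img2vid-xt-1-1/model_index.json",
    "ckpts/stable-video-diffusion-img2vid-xt-1-1/svd_xt_1_1.safetensors",
    "models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar",
]

missing = [p for p in EXPECTED if not os.path.isfile(p)]
if missing:
    raise FileNotFoundError("Incomplete checkpoints, missing: " + ", ".join(missing))
```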
MOFA-Video-Traj/run_gradio.py → app.py
RENAMED
@@ -28,6 +28,7 @@ from diffusers.utils.import_utils import is_xformers_available
 
 from utils.flow_viz import flow_to_image
 from utils.utils import split_filename, image2arr, image2pil, ensure_dirname
+from huggingface_hub import login, hf_hub_download, snapshot_download
 
 
 output_dir_video = "./outputs/videos"
@@ -85,7 +86,12 @@ def get_sparseflow_and_mask_forward(
 
     return s_flow, mask
 
-
+def download_models(ckpts_path):
+    try:
+        snapshot_download(repo_id="vdo/stable-video-diffusion-img2vid-xt-1-1", local_dir=ckpts_path, cache_dir=ckpts_path)
+        snapshot_download(repo_id="MyNiuuu/MOFA-Video-Traj", local_dir=ckpts_path, cache_dir=ckpts_path, allow_patterns=["ckpts/controlnet/*"])
+    except (Exception, BaseException) as error:
+        print(error)
 
 def init_models(pretrained_model_name_or_path, resume_from_checkpoint, weight_dtype, device='cuda', enable_xformers_memory_efficient_attention=False, allow_tf32=False):
 
@@ -216,11 +222,14 @@ class Drag:
     def __init__(self, device, height, width, model_length):
         self.device = device
 
+        ckpts_dir = "ckpts/"
         svd_ckpt = "ckpts/stable-video-diffusion-img2vid-xt-1-1"
         mofa_ckpt = "ckpts/controlnet"
 
         self.device = 'cuda'
         self.weight_dtype = torch.float16
+
+        download_models(ckpts_dir)
 
         self.pipeline, self.cmp = init_models(
             svd_ckpt,
@@ -631,6 +640,10 @@ class Drag:
         return hint_path, outputs_path, flows_path, outputs_mp4_path, flows_mp4_path
 
 
+# Download checkpoints to the right place
+
+
+
 with gr.Blocks() as demo:
     gr.Markdown("""<h1 align="center">MOFA-Video</h1><br>""")
 
@@ -828,4 +841,5 @@ with gr.Blocks() as demo:
 
     run_button.click(DragNUWA_net.run, [first_frame_path, tracking_points, inference_batch_size, motion_brush_mask, motion_brush_viz, ctrl_scale], [hint_image, output_video, output_flow, output_video_mp4, output_flow_mp4])
 
-demo.launch(
+demo.launch()
+# demo.launch(server_name="0.0.0.0", debug=True, server_port=80)
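The net effect of this rename is a Spaces-ready entry point: `app.py` now fetches its own weights at startup, with `download_models` swallowing errors so a restarted Space with already-cached weights still boots, and launches Gradio with default arguments while the old self-hosted settings survive as the commented-out line. A minimal sketch of that startup pattern, with a placeholder UI and only the SVD download shown:

```python
import gradio as gr
from huggingface_hub import snapshot_download

def download_models(ckpts_path: str) -> None:
    # Same pattern as the diff above: best-effort download, print-and-continue
    # on failure so a reboot with cached weights still starts the app.
    try:
        snapshot_download(repo_id="vdo/stable-video-diffusion-img2vid-xt-1-1",
                          local_dir=ckpts_path, cache_dir=ckpts_path)
    except Exception as error:
        print(error)

download_models("ckpts/")

with gr.Blocks() as demo:
    gr.Markdown("placeholder UI")

# Spaces expects the default launch arguments; for self-hosting one would
# instead bind explicitly, e.g. server_name="0.0.0.0" as in the old line.
demo.launch()
```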
assets/images/README.md
DELETED
@@ -1 +0,0 @@
-README
assets/images/project-mofa.png
DELETED
Binary file (652 kB)
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate_slurm.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/config.yaml
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume.sh
RENAMED • File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume_slurm.sh
RENAMED • File without changes