---
pipeline_tag: image-to-video
---

<p align="center">
  <img src="./demos/demo-01.gif" width="70%" /> 
  <img src="./demos/demo-02.gif" width="70%" />
  <img src="./demos/demo-03.gif" width="70%" />

</p>
<p align="center">Samples generated by AnimateLCM-SVD-xt</p>


## Introduction
AnimateLCM-SVD-xt is a consistency-distilled version of [Stable Video Diffusion Image2Video-XT (SVD-xt)](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt), trained following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769).
It can generate good-quality, image-conditioned videos of 25 frames at 576x1024 resolution in 2~8 inference steps.

## Computation comparison
AnimateLCM-SVD-xt generally produces results of good quality in 4 steps without requiring classifier-free guidance. A standard SVD model typically uses 25 denoising steps with classifier-free guidance, i.e. 25 × 2 = 50 UNet evaluations per video, whereas 4 guidance-free steps need only 4, a saving of roughly 25 × 2 / 4 = 12.5× in computation.
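This repository does not include an official inference script (see the contact note at the end). The snippet below is only a minimal sketch of how the model could be run with the standard `diffusers` `StableVideoDiffusionPipeline`, assuming the distilled weights here can be loaded as a drop-in replacement for the SVD-xt UNet; the repository ID, the `unet` subfolder, and the use of the pipeline's default scheduler are assumptions rather than the authors' released setup, whose dedicated scheduler design is available on request.

```python
import torch
from diffusers import StableVideoDiffusionPipeline, UNetSpatioTemporalConditionModel
from diffusers.utils import load_image, export_to_video

# Assumed repo ID / layout: swap the distilled AnimateLCM-SVD-xt UNet
# into the base SVD-xt pipeline.
unet = UNetSpatioTemporalConditionModel.from_pretrained(
    "wangfuyun/AnimateLCM-SVD-xt", subfolder="unet", torch_dtype=torch.float16
)
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Conditioning image at the model's native 576x1024 resolution.
image = load_image("input.png").resize((1024, 576))

# Few-step sampling without classifier-free guidance (guidance scale pinned to 1.0).
frames = pipe(
    image,
    num_frames=25,
    num_inference_steps=4,
    min_guidance_scale=1.0,
    max_guidance_scale=1.0,
    decode_chunk_size=8,
    generator=torch.Generator("cpu").manual_seed(42),
).frames[0]

export_to_video(frames, "animatelcm_svd_sample.mp4", fps=7)
```

Setting `num_inference_steps` anywhere from 2 to 8 corresponds to the settings used for the demos below.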


## Demos

|     |     |     |
| :---: | :---: | :---: |
| ![Demo 1, 2 steps](./demos/01-2.gif) | ![Demo 1, 4 steps](./demos/01-4.gif) | ![Demo 1, 8 steps](./demos/01-8.gif) |
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 |
| ![Demo 2, 2 steps](./demos/02-2.gif) | ![Demo 2, 4 steps](./demos/02-4.gif) | ![Demo 2, 8 steps](./demos/02-8.gif) |
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 |
| ![Demo 3, 2 steps](./demos/03-2.gif) | ![Demo 3, 4 steps](./demos/03-4.gif) | ![Demo 3, 8 steps](./demos/03-8.gif) |
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 |
| ![Demo 4, 2 steps](./demos/04-2.gif) | ![Demo 4, 4 steps](./demos/04-4.gif) | ![Demo 4, 8 steps](./demos/04-8.gif) |
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 |
| ![Demo 5, 2 steps](./demos/05-2.gif) | ![Demo 5, 4 steps](./demos/05-4.gif) | ![Demo 5, 8 steps](./demos/05-8.gif) |
| 2 steps, cfg=1 | 4 steps, cfg=1 | 8 steps, cfg=1 |

Please contact Fu-Yun Wang (fywang@link.cuhk.edu.hk) for the inference code and the scheduler design. Responses may take a little while. Thank you!