Update README.md
pipeline_tag: image-to-video
---
## Introduction
Consistency Distilled [Stable Video Diffusion Image2Video-XT (SVD-xt)](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt) following the strategy proposed in [AnimateLCM-paper](https://arxiv.org/abs/2402.00769).
AnimateLCM-SVD-xt can generate good-quality, image-conditioned videos with 25 frames at 576x1024 resolution in 2 to 8 sampling steps.
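As a minimal sketch of few-step generation, assuming this checkpoint loads with diffusers' `StableVideoDiffusionPipeline` and that the repo id, input image path, and output path shown here are placeholders:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Load the consistency-distilled SVD-xt weights (repo id assumed)
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "wangfuyun/AnimateLCM-SVD-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# Conditioning image at the model's native 576x1024 resolution
image = load_image("input.png").resize((1024, 576))

# Few-step sampling: 4 steps, guidance scale pinned to 1.0 (no CFG)
frames = pipe(
    image,
    num_frames=25,
    num_inference_steps=4,
    min_guidance_scale=1.0,
    max_guidance_scale=1.0,
    decode_chunk_size=8,
).frames[0]

export_to_video(frames, "output.mp4", fps=7)
```

Setting both guidance scales to 1.0 disables classifier-free guidance, which is what allows single-pass U-Net evaluation per step.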
## Computation comparison
AnimateLCM-SVD-xt generally produces good-quality results in 4 steps without requiring classifier-free guidance, and therefore saves 25 x 2 / 4 = 12.5 times the computation compared with standard SVD models (25 steps with two U-Net passes per step under CFG, versus 4 steps with one pass each).
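The savings figure can be checked with a quick back-of-the-envelope calculation, assuming one U-Net evaluation per sampling step, doubled when classifier-free guidance is on:

```python
# Standard SVD: 25 sampling steps, 2 U-Net passes per step (CFG enabled)
svd_evals = 25 * 2

# AnimateLCM-SVD-xt: 4 steps, 1 U-Net pass per step (no CFG)
lcm_evals = 4 * 1

speedup = svd_evals / lcm_evals
print(speedup)  # → 12.5
```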
## Demos