Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@ pipeline_tag: image-to-video

 AnimateLCM-I2V is a latent image-to-video consistency model finetuned with [AnimateLCM](https://huggingface.co/wangfuyun/AnimateLCM) following the strategy proposed in the [AnimateLCM paper](https://arxiv.org/abs/2402.00769) without requiring teacher models.

-[AnimateLCM:
+[AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data](https://arxiv.org/abs/2402.00769) by Fu-Yun Wang et al.

 ## Example-Video