---
pipeline_tag: image-to-video
---
# StreamingSVD
**[StreamingSVD: Consistent, Dynamic, and Extendable Image-Guided Long Video Generation]()**
</br>
Roberto Henschel,
Levon Khachatryan,
Daniil Hayrapetyan,
Hayk Poghosyan,
Vahram Tadevosyan,
Zhangyang Wang, Shant Navasardyan, Humphrey Shi
</br>
[Video](https://www.youtube.com/watch?v=md4lp42vOGU) | [Project page](https://streamingt2v.github.io) | [Code](https://github.com/Picsart-AI-Research/StreamingT2V)
<p align="center">
<img src="__assets__/teaser/Streaming_SVD_teaser.jpg" width="800px"/>
<br>
<h2>🔥 Meet StreamingSVD - A StreamingT2V Method</h2>
<em>
StreamingSVD is an advanced autoregressive technique for image-to-video generation. It produces long, high-quality videos with rich motion dynamics, turning SVD into a long-video generator. Our method ensures temporal consistency throughout the video, aligns closely with the input image, and maintains high frame-level image quality. Our demonstrations include videos of up to 200 frames (spanning 8 seconds), and the approach can be extended to even longer durations.
The effectiveness of the underlying autoregressive approach is not limited to the specific base model used, indicating that improvements in base models can yield even higher-quality videos. StreamingSVD is part of the StreamingT2V family.
</em>
</p>
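The high-level idea of the chunk-wise autoregressive extension can be sketched as follows. This is an illustrative sketch only, not the official StreamingSVD API: `generate_chunk`, `CHUNK_LEN`, and `OVERLAP` are hypothetical placeholders standing in for the SVD-based generator and its conditioning scheme.

```
# Illustrative sketch only (assumed names, not the official StreamingSVD API):
# a short-video base model produces fixed-size chunks, and each new chunk is
# conditioned on the last frames of the video generated so far, so the clip
# can be extended autoregressively to arbitrary length.
import numpy as np

CHUNK_LEN = 25  # frames produced per autoregressive step (assumed value)
OVERLAP = 8     # conditioning frames carried over from the previous chunk (assumed value)

def generate_chunk(cond_frames: np.ndarray, num_frames: int) -> np.ndarray:
    """Hypothetical stand-in for the SVD-based image-to-video generator."""
    height, width, channels = cond_frames.shape[1:]
    return np.random.rand(num_frames, height, width, channels).astype(np.float32)

def generate_long_video(input_image: np.ndarray, total_frames: int = 200) -> np.ndarray:
    # The first chunk is conditioned on the single input image.
    video = generate_chunk(input_image[None], CHUNK_LEN)
    while video.shape[0] < total_frames:
        cond = video[-OVERLAP:]                 # anchor on the most recent frames
        chunk = generate_chunk(cond, CHUNK_LEN)
        # Drop the re-generated overlap region and append only the new frames.
        video = np.concatenate([video, chunk[OVERLAP:]], axis=0)
    return video[:total_frames]

video = generate_long_video(np.zeros((576, 1024, 3), dtype=np.float32))
print(video.shape)  # (200, 576, 1024, 3)
```

In the actual method the conditioning happens inside the diffusion model rather than by simple frame concatenation; the loop above only illustrates the autoregressive structure that lets a short-video model be extended to long videos.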
## BibTeX
If you use our work in your research, please cite our publications. The StreamingSVD paper is coming soon; in the meantime, please cite StreamingT2V:
```
@article{henschel2024streamingt2v,
title={StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text},
author={Henschel, Roberto and Khachatryan, Levon and Hayrapetyan, Daniil and Poghosyan, Hayk and Tadevosyan, Vahram and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2403.14773},
year={2024}
}
```