Abstract
We explore a novel video creation experience, namely Video Creation by Demonstration. Given a demonstration video and a context image from a different scene, we generate a physically plausible video that continues naturally from the context image and carries out the action concepts from the demonstration. To enable this capability, we present delta-Diffusion, a self-supervised training approach that learns from unlabeled videos by conditional future frame prediction. Unlike most existing video generation controls, which rely on explicit signals, we adopt implicit latent control for the flexibility and expressiveness that general videos demand. By building an appearance bottleneck design on top of a video foundation model, we extract action latents from demonstration videos to condition the generation process with minimal appearance leakage. Empirically, delta-Diffusion outperforms related baselines in both human preference and large-scale machine evaluations, and demonstrates potential for interactive world simulation. Sampled video generation results are available at https://delta-diffusion.github.io/.
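For readers who think in code, the following minimal PyTorch sketch illustrates the pipeline the abstract describes: a frozen video foundation model embeds the demonstration frames, an appearance bottleneck compresses those features into action latents, and a video diffusion model is conditioned on the context image together with those latents. All names here (AppearanceBottleneck, generate_by_demonstration, and the video_encoder / diffusion_model.sample interfaces) are hypothetical placeholders, and the mean-subtraction bottleneck is only one simple way to suppress static appearance; this is a sketch of the idea, not the authors' implementation.

```python
# Hypothetical sketch only: module names, shapes, and the encoder/diffusion
# interfaces below are illustrative placeholders, not the authors' code.
import torch
import torch.nn as nn


class AppearanceBottleneck(nn.Module):
    """Compress per-frame features so that mostly action (temporal change)
    information survives, limiting appearance leakage from the demonstration."""

    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, latent_dim)  # narrow bottleneck, latent_dim << feat_dim

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, time, feat_dim) from a frozen video foundation model.
        # Removing the temporal mean discards static (appearance-heavy) content and
        # keeps frame-to-frame change before the low-dimensional projection; this is
        # one simple bottleneck choice, not necessarily the paper's exact design.
        deltas = frame_feats - frame_feats.mean(dim=1, keepdim=True)
        return self.proj(deltas)  # (batch, time, latent_dim) action latents


def generate_by_demonstration(video_encoder, bottleneck, diffusion_model,
                              demo_video, context_image):
    """Condition a video diffusion model on a context image plus implicit
    action latents extracted from a demonstration video."""
    with torch.no_grad():  # the video foundation model stays frozen
        frame_feats = video_encoder(demo_video)      # (B, T, feat_dim)
    action_latents = bottleneck(frame_feats)         # (B, T, latent_dim)
    # Training is self-supervised future-frame prediction: demonstration and
    # target come from the same clip; at inference they are different scenes.
    return diffusion_model.sample(first_frame=context_image, cond=action_latents)


# Quick shape check with random features standing in for a real encoder:
if __name__ == "__main__":
    bottleneck = AppearanceBottleneck(feat_dim=1024, latent_dim=64)
    demo_feats = torch.randn(1, 16, 1024)  # 16 demonstration frames
    print(bottleneck(demo_feats).shape)    # torch.Size([1, 16, 64])
```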
Community
Introducing our latest work, Video Creation by Demonstration, a novel video creation experience. Given a demonstration video and a context image from a different scene, we generate a physically plausible video that continues naturally from the context image and carries out the action concepts from the demonstration. Video Creation by Demonstration is one step towards interactive world simulation using video as "language". Project page: https://delta-diffusion.github.io/.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- Track4Gen: Teaching Video Diffusion Models to Track Points Improves Video Generation (2024)
- Motion Control for Enhanced Complex Action Video Generation (2024)
- Movie Gen: A Cast of Media Foundation Models (2024)
- Efficient Long Video Tokenization via Coordinate-based Patch Reconstruction (2024)
- SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints (2024)
- AnimateAnything: Consistent and Controllable Animation for Video Generation (2024)
- REDUCIO! Generating 1024×1024 Video within 16 Seconds using Extremely Compressed Motion Latents (2024)