VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control
Abstract
Despite significant advances in video generation, inserting a given object into a video remains challenging: the model must preserve the appearance details of the reference object while accurately modeling coherent motion. In this paper, we propose VideoAnydoor, a zero-shot video object insertion framework with high-fidelity detail preservation and precise motion control. Starting from a text-to-video model, we utilize an ID extractor to inject the global identity and leverage a box sequence to control the overall motion. To preserve detailed appearance while supporting fine-grained motion control, we design a pixel warper. It takes as input the reference image with arbitrary key-points and the corresponding key-point trajectories, warps the pixel details according to the trajectories, and fuses the warped features with the diffusion U-Net, thereby improving detail preservation and allowing users to manipulate the motion trajectories. In addition, we propose a training strategy that combines videos and static images with a reweighted reconstruction loss to enhance insertion quality. VideoAnydoor demonstrates significant superiority over existing methods and naturally supports various downstream applications (e.g., talking head generation, video virtual try-on, multi-region editing) without task-specific fine-tuning.
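The abstract only sketches the pixel warper and the reweighted loss at a high level. Below is a minimal, hypothetical PyTorch sketch of how trajectory-guided feature warping and a region-reweighted reconstruction loss could be wired up. It is not the authors' implementation: the module name `PixelWarperSketch`, all shapes, the scatter-style warping, the additive fusion, and the loss weighting are assumptions made purely for illustration.

```python
# Illustrative sketch only; NOT the VideoAnydoor implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelWarperSketch(nn.Module):
    """Warp reference-image features along key-point trajectories and
    produce a feature map that could be fused with diffusion U-Net features."""

    def __init__(self, channels: int = 320):
        super().__init__()
        self.encoder = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, reference, keypoints_src, keypoints_dst, feat_hw):
        """
        reference:     (B, 3, H, W) reference object image
        keypoints_src: (B, K, 2) key-point locations on the reference, in [-1, 1]
        keypoints_dst: (B, K, 2) trajectory targets in the current frame, in [-1, 1]
        feat_hw:       (h, w) spatial size of the U-Net feature map to match
        """
        b = reference.shape[0]
        h, w = feat_hw
        ref_feat = self.encoder(reference)                       # (B, C, H, W)

        # Sample reference features at the source key-points ...
        grid_src = keypoints_src.view(b, -1, 1, 2)               # (B, K, 1, 2)
        kp_feat = F.grid_sample(ref_feat, grid_src,
                                align_corners=True)              # (B, C, K, 1)

        # ... and scatter ("warp") them to the trajectory targets,
        # building a sparse feature map at the destination locations.
        warped = ref_feat.new_zeros(b, ref_feat.shape[1], h, w)
        xs = ((keypoints_dst[..., 0] + 1) * 0.5 * (w - 1)).long().clamp(0, w - 1)
        ys = ((keypoints_dst[..., 1] + 1) * 0.5 * (h - 1)).long().clamp(0, h - 1)
        for i in range(b):
            warped[i, :, ys[i], xs[i]] = kp_feat[i, :, :, 0]
        return self.fuse(warped)                                 # (B, C, h, w)


def reweighted_reconstruction_loss(pred, target, insert_mask, region_weight=2.0):
    """MSE loss with a higher weight inside the insertion region
    (one guess at what a 'reweighted reconstruction loss' could mean)."""
    weight = 1.0 + (region_weight - 1.0) * insert_mask
    return (weight * (pred - target) ** 2).mean()


# Toy usage (hypothetical additive fusion into a U-Net feature map):
# warper = PixelWarperSketch(channels=320)
# ref = torch.randn(1, 3, 256, 256)
# kp_src = torch.rand(1, 8, 2) * 2 - 1
# kp_dst = torch.rand(1, 8, 2) * 2 - 1
# extra = warper(ref, kp_src, kp_dst, feat_hw=(32, 32))
# unet_feat = unet_feat + extra
```

The key idea conveyed by the sketch is simply that user-specified key-point trajectories decide where reference appearance features land in each frame, and that the loss can emphasize the insertion region; the paper's actual fusion and weighting schemes may differ.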
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MotionCharacter: Identity-Preserving and Motion Controllable Human Video Generation (2024)
- DIVE: Taming DINO for Subject-Driven Video Editing (2024)
- CPA: Camera-pose-awareness Diffusion Transformer for Video Generation (2024)
- UniReal: Universal Image Generation and Editing via Learning Real-world Dynamics (2024)
- VideoDirector: Precise Video Editing via Text-to-Video Models (2024)
- VIVID-10M: A Dataset and Baseline for Versatile and Interactive Video Local Editing (2024)
- I2VControl: Disentangled and Unified Video Motion Synthesis Control (2024)
Does the pixel warper method struggle with transparent or reflective objects like glass and mirrors? These seem like they could challenge both the key-point tracking and appearance preservation. Great work!