arxiv:2412.14173

AniDoc: Animation Creation Made Easier

Published on Dec 18
· Submitted by Yhmeng1106 on Dec 19
#2 Paper of the day
Abstract

The production of 2D animation follows an industry-standard workflow, encompassing four essential stages: character design, keyframe animation, in-betweening, and coloring. Our research focuses on reducing the labor costs of this process by harnessing the potential of increasingly powerful generative AI. Using video diffusion models as the foundation, AniDoc emerges as a video line art colorization tool, which automatically converts sketch sequences into colored animations that follow a reference character specification. Our model exploits correspondence matching as explicit guidance, yielding strong robustness to variations (e.g., in posture) between the reference character and each line art frame. In addition, our model can even automate the in-betweening process, so that users can easily create a temporally consistent animation by simply providing a character image together with the start and end sketches. Our code is available at: https://yihao-meng.github.io/AniDoc_demo.
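To illustrate the correspondence-matching idea mentioned above, here is a minimal, purely hypothetical sketch (not the authors' code): each keypoint on a line-art frame is paired with its nearest keypoint on the reference character, and such pairings could then serve as explicit guidance for colorization. The keypoints and the nearest-neighbor matcher are toy assumptions for illustration only.

```python
# Hypothetical illustration of correspondence matching between a reference
# character and one line-art frame. Not the AniDoc implementation.
from math import dist


def match_points(ref_points, sketch_points):
    """For each sketch keypoint, return the index of the nearest reference keypoint."""
    return [
        min(range(len(ref_points)), key=lambda i: dist(ref_points[i], p))
        for p in sketch_points
    ]


# Toy (x, y) keypoints on the reference design and on one sketch frame,
# where the character appears in a slightly different pose.
ref = [(10, 10), (50, 80), (90, 20)]
sketch = [(12, 14), (48, 75), (95, 25)]

matches = match_points(ref, sketch)
print(matches)  # -> [0, 1, 2]: each sketch point paired with its reference counterpart
```

In the paper's setting, such matches would be computed with learned features rather than raw coordinates, and injected into the diffusion model as conditioning so that colors transfer correctly even when the pose differs from the reference.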

Community

Paper submitter

We are excited to share our paper, which focuses on automating the animation creation workflow using video diffusion models. We strongly recommend taking a look at our demo video!

Paper link: https://arxiv.org/abs/2412.14173
Demo link: https://yihao-meng.github.io/AniDoc_demo
Code available at: https://github.com/yihao-meng/AniDoc

