StrandHead: Text to Strand-Disentangled 3D Head Avatars Using Hair Geometric Priors
Abstract
While a haircut conveys distinct personality, existing avatar generation methods struggle to model realistic hair because they rely on general or entangled representations. We propose StrandHead, a novel text-to-3D head avatar generation method capable of generating disentangled 3D hair with a strand representation. Without using 3D data for supervision, we demonstrate that realistic hair strands can be generated from prompts by distilling 2D generative diffusion models. To this end, we propose a series of reliable priors on shape initialization, geometric primitives, and statistical haircut features, leading to stable optimization and text-aligned performance. Extensive experiments show that StrandHead achieves state-of-the-art realism and diversity in generated 3D heads and hair. The generated 3D hair can also be easily imported into Unreal Engine for physical simulation and other applications. The code will be available at https://xiaokunsun.github.io/StrandHead.github.io.
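The abstract's core idea — optimizing 3D strand parameters by distilling a pretrained 2D diffusion model, with no 3D supervision — follows the general shape of Score Distillation Sampling (SDS). The sketch below is a minimal, hypothetical illustration of that loop: `predict_noise` and `render_strands` are toy stand-ins (a real system would call a text-conditioned diffusion model such as Stable Diffusion and a differentiable strand rasterizer), not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(noisy_img, t, prompt_embedding):
    # Toy stand-in for a pretrained 2D diffusion denoiser: given a noisy
    # render and a text condition, predict the noise. This linear mock
    # (an assumption, not the paper's model) pulls renders toward the
    # prompt embedding so the loop visibly converges.
    return noisy_img - prompt_embedding

def render_strands(strand_params):
    # Placeholder differentiable renderer: treats the strand parameters
    # directly as an "image". StrandHead's real pipeline rasterizes 3D
    # hair strands; this identity map is a hypothetical simplification.
    return strand_params

prompt_embedding = np.full(16, 0.5)    # hypothetical text embedding
strand_params = rng.normal(size=16)    # strand parameters to optimize
lr, sigma = 0.1, 0.2

# SDS-style loop: noise the render, query the denoiser, and use
# (predicted noise - injected noise) as a gradient on the 3D parameters.
for step in range(200):
    img = render_strands(strand_params)
    noise = rng.normal(scale=sigma, size=img.shape)
    noisy = img + noise
    grad = predict_noise(noisy, t=0.5, prompt_embedding=prompt_embedding) - noise
    strand_params -= lr * grad         # d(img)/d(params) is identity here
```

With the mock denoiser, the strand parameters converge toward the prompt embedding; in the real method, the same gradient signal instead shapes strand geometry to match the text prompt, guided by the shape-initialization and haircut priors described above.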
Community
TL;DR: Given a text prompt, our method generates a high-quality 3D head avatar with strand-level textured hair, enabling strand-based rendering and simulation.
Project Page: https://xiaokunsun.github.io/StrandHead.github.io
Code: https://github.com/XiaokunSun/StrandHead
3D Results: https://drive.google.com/drive/folders/1Ve2vVVilzI-2TYNB9wQrLgG53L2PjFBM?usp=sharing
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SimAvatar: Simulation-Ready Avatars with Layered Hair and Clothing (2024)
- DreamPolish: Domain Score Distillation With Progressive Geometry Generation (2024)
- DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters (2024)
- FATE: Full-head Gaussian Avatar with Textural Editing from Monocular Video (2024)
- Enhanced 3D Generation by 2D Editing (2024)
- CompGS: Unleashing 2D Compositionality for Compositional Text-to-3D via Dynamically Optimizing 3D Gaussians (2024)
- DynamicAvatars: Accurate Dynamic Facial Avatars Reconstruction and Precise Editing with Diffusion Models (2024)