Material Anything: Generating Materials for Any 3D Object via Diffusion
Abstract
We present Material Anything, a fully automated, unified diffusion framework designed to generate physically-based materials for 3D objects. Unlike existing methods that rely on complex pipelines or case-specific optimizations, Material Anything offers a robust, end-to-end solution adaptable to objects under diverse lighting conditions. Our approach leverages a pre-trained image diffusion model, enhanced with a triple-head architecture and a rendering loss to improve stability and material quality. Additionally, we introduce confidence masks as a dynamic switcher within the diffusion model, enabling it to effectively handle both textured and texture-less objects across varying lighting conditions. By employing a progressive material generation strategy guided by these confidence masks, along with a UV-space material refiner, our method ensures consistent, UV-ready material outputs. Extensive experiments demonstrate that our approach outperforms existing methods across a wide range of object categories and lighting conditions.
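For intuition, the triple-head design and rendering loss described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the confidence mask is fed in as an extra input channel (acting as the "dynamic switcher"), uses a toy convolutional trunk in place of the pre-trained diffusion U-Net, and substitutes a trivial shading function for a differentiable PBR renderer. All names (`TripleHeadDenoiser`, `toy_shade`) and channel counts are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleHeadDenoiser(nn.Module):
    def __init__(self, in_ch: int = 8, feat: int = 64):
        super().__init__()
        # Toy trunk standing in for the pre-trained image diffusion U-Net.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.SiLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.SiLU(),
        )
        # One head per material map ("triple-head architecture").
        self.albedo_head = nn.Conv2d(feat, 3, 1)   # RGB albedo
        self.rm_head     = nn.Conv2d(feat, 2, 1)   # roughness + metallic
        self.bump_head   = nn.Conv2d(feat, 3, 1)   # bump / normal

    def forward(self, noisy, cond_image, confidence_mask):
        # The confidence mask is concatenated as a channel, letting the
        # network switch between trusting an existing input texture and
        # generating material appearance from scratch.
        x = torch.cat([noisy, cond_image, confidence_mask], dim=1)
        h = self.trunk(x)
        return self.albedo_head(h), self.rm_head(h), self.bump_head(h)

def toy_shade(albedo, rm, bump):
    # Placeholder for a differentiable PBR renderer; a real implementation
    # would shade the predicted maps under a known environment light.
    roughness = rm[:, :1]
    return albedo * (1.0 - 0.5 * roughness)

def rendering_loss(heads, target_render):
    # Re-render the predicted maps and compare against the reference image,
    # supervising all three heads jointly rather than per map.
    albedo, rm, bump = heads
    return F.l1_loss(toy_shade(albedo, rm, bump), target_render)

# Usage with random tensors (batch 1, 64x64): noisy latents (3 ch) +
# condition image (4 ch) + confidence mask (1 ch) = 8 input channels.
model = TripleHeadDenoiser()
noisy = torch.randn(1, 3, 64, 64)
cond = torch.randn(1, 4, 64, 64)
mask = torch.rand(1, 1, 64, 64)
loss = rendering_loss(model(noisy, cond, mask), torch.randn(1, 3, 64, 64))
loss.backward()
```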
Community
The following similar papers were recommended by the Semantic Scholar API:
- SSEditor: Controllable Mask-to-Scene Generation with Diffusion Model (2024)
- MVLight: Relightable Text-to-3D Generation via Light-conditioned Multi-View Diffusion (2024)
- OminiControl: Minimal and Universal Control for Diffusion Transformer (2024)
- TEXGen: a Generative Diffusion Model for Mesh Textures (2024)
- ARM: Appearance Reconstruction Model for Relightable 3D Generation (2024)
- Any-to-3D Generation via Hybrid Diffusion Supervision (2024)
- GANESH: Generalizable NeRF for Lensless Imaging (2024)