We present Material Anything, a fully automated, unified diffusion framework designed to generate physically based materials for 3D objects. Unlike existing methods that rely on complex pipelines or case-specific optimizations, Material Anything offers a robust, end-to-end solution adaptable to objects under diverse lighting conditions. Our approach leverages a pre-trained image diffusion model, enhanced with a triple-head architecture and a rendering loss to improve stability and material quality. Additionally, we introduce confidence masks as a dynamic switcher within the diffusion model, enabling it to effectively handle both textured and texture-less objects across varying lighting conditions. By employing a progressive material generation strategy guided by these confidence masks, along with a UV-space material refiner, our method ensures consistent, UV-ready material outputs. Extensive experiments demonstrate that our approach outperforms existing methods across a wide range of object categories and lighting conditions.
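The two core mechanisms named above, a rendering loss over multi-head material outputs and confidence masks acting as a dynamic switcher, can be illustrated with a minimal sketch. All function names, the diffuse-only shading model, and the hard thresholding below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def render_diffuse(albedo, light_intensity=1.0):
    """Toy diffuse-only render: albedo scaled by a scalar light term.
    A real PBR renderer would also use roughness/metallic maps."""
    return np.clip(albedo * light_intensity, 0.0, 1.0)

def rendering_loss(pred_maps, target_render, light_intensity=1.0):
    """MSE between a render of the predicted material maps and a target
    render; this is one plausible form of a 'rendering loss'."""
    pred_render = render_diffuse(pred_maps["albedo"], light_intensity)
    return float(np.mean((pred_render - target_render) ** 2))

def blend_with_confidence(estimated, generated, confidence, threshold=0.5):
    """Confidence mask as a switcher: keep the image-space estimate
    where confidence is high, fall back to the generated material
    elsewhere (hypothetical hard-threshold variant)."""
    mask = (confidence > threshold).astype(estimated.dtype)
    return mask * estimated + (1.0 - mask) * generated
```

In this sketch the loss vanishes when the rendered prediction matches the target, and the blend routes each pixel to one of the two material sources, which is one way a single model could handle both textured and texture-less inputs.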