High-quality material synthesis is essential for replicating complex surface properties to create realistic digital scenes. However, existing methods often suffer from inefficiencies in time and memory, require domain expertise, or demand extensive training data, with high-dimensional material data further constraining performance. Additionally, most approaches lack multi-modal guidance capabilities and standardized evaluation metrics, limiting control and comparability in synthesis tasks. To address these limitations, we propose NeuMaDiff, a novel neural material synthesis framework utilizing hyperdiffusion. Our method employs neural fields as a low-dimensional representation and incorporates a multi-modal conditional hyperdiffusion model to learn the distribution over material weights. This enables flexible guidance through inputs such as material type, text descriptions, or reference images, providing greater control over synthesis. To support future research, we contribute two new material datasets and introduce two BRDF distributional metrics for more rigorous evaluation. We demonstrate the effectiveness of NeuMaDiff through extensive experiments, including a novel statistics-based constrained synthesis approach, which enables the generation of materials of desired categories.