Implicit neural representations (INRs) have emerged as a powerful tool for learning 3D geometry, offering distinct advantages over conventional representations such as meshes. A common type of INR encodes a shape's boundary implicitly as the zero-level set of a learned continuous function and learns a mapping from a low-dimensional latent space to the space of possible shapes, each represented by its signed distance function (SDF). However, most INRs struggle to retain the high-frequency details that are crucial for accurate geometric depiction, and they are computationally expensive. To address these limitations, we present a novel approach that both reduces computational cost and enhances the capture of fine detail. Our method integrates periodic activation functions, positional encodings, and surface normals into the neural network architecture. This combination significantly improves the model's ability to learn the space of 3D shapes while preserving intricate detail and sharp features, areas where conventional representations often fall short.
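To make the three ingredients concrete, here is a minimal PyTorch sketch of a latent-conditioned SDF network that combines Fourier positional encoding, sine (periodic) activations, and normal supervision via the SDF gradient. All names (`FourierEncoding`, `Sine`, `LatentSDF`) and hyperparameters (`w0`, `latent_dim`, `num_freqs`) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class FourierEncoding(nn.Module):
    """Positional encoding: maps xyz to sin/cos features at 2^k-scaled frequencies."""
    def __init__(self, num_freqs: int = 6):
        super().__init__()
        self.register_buffer("freqs", (2.0 ** torch.arange(num_freqs)) * torch.pi)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3) -> (B, 3 + 2 * 3 * num_freqs)
        scaled = xyz.unsqueeze(-1) * self.freqs            # (B, 3, num_freqs)
        enc = torch.cat([scaled.sin(), scaled.cos()], -1)  # (B, 3, 2 * num_freqs)
        return torch.cat([xyz, enc.flatten(1)], dim=-1)

class Sine(nn.Module):
    """Periodic activation sin(w0 * x), as popularized by SIREN."""
    def __init__(self, w0: float = 30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * x)

class LatentSDF(nn.Module):
    """f(xyz, z) -> signed distance; z is a per-shape latent code."""
    def __init__(self, latent_dim: int = 256, hidden: int = 256, num_freqs: int = 6):
        super().__init__()
        self.encode = FourierEncoding(num_freqs)
        in_dim = 3 + 2 * 3 * num_freqs + latent_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), Sine(),
            nn.Linear(hidden, hidden), Sine(),
            nn.Linear(hidden, hidden), Sine(),
            nn.Linear(hidden, 1),  # scalar SDF value
        )

    def forward(self, xyz: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([self.encode(xyz), z], dim=-1))

# Normal supervision (sketch): the gradient of the SDF at surface samples
# should align with the ground-truth surface normals.
model = LatentSDF()
xyz = torch.rand(1024, 3, requires_grad=True)     # surface samples (placeholder data)
z = torch.randn(1, 256).expand(1024, -1)          # one shape code, broadcast per point
gt_normals = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)  # placeholder
sdf = model(xyz, z)
(grad,) = torch.autograd.grad(sdf.sum(), xyz, create_graph=True)
normal_loss = (1 - torch.nn.functional.cosine_similarity(grad, gt_normals, dim=-1)).mean()
```

The normal term exploits the fact that the gradient of a signed distance function is the unit surface normal, so supervising the gradient of f against ground-truth normals directly encourages sharp, well-oriented geometry.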