State-of-the-art neural implicit surface representations have achieved impressive results in indoor scene reconstruction by incorporating monocular geometric priors as additional supervision. However, we have observed that multi-view inconsistency between such priors poses a challenge for high-quality reconstructions. In response, we present NC-SDF, a neural signed distance field (SDF) 3D reconstruction framework with view-dependent normal compensation (NC). Specifically, we integrate view-dependent biases in monocular normal priors into the neural implicit representation of the scene. By adaptively learning and correcting the biases, our NC-SDF effectively mitigates the adverse impact of inconsistent supervision, enhancing both the global consistency and local details in the reconstructions. To further refine the details, we introduce an informative pixel sampling strategy to pay more attention to intricate geometry with higher information content. Additionally, we design a hybrid geometry modeling approach to improve the neural implicit representation. Experiments on synthetic and real-world datasets demonstrate that NC-SDF outperforms existing approaches in terms of reconstruction quality.
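To make the core idea concrete, here is a minimal sketch of view-dependent normal compensation: a monocular normal prior is corrected by a per-view rotation before being compared against the normal rendered from the SDF. This is an illustrative toy, not the paper's implementation; in NC-SDF the compensation would be predicted by a network per view, whereas here the rotation parameters (`comp_axis`, `comp_angle`) and function names are hypothetical stand-ins.

```python
import numpy as np

def rotation_from_axis_angle(axis, angle):
    # Rodrigues' formula: rotation matrix from a unit axis and an angle.
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def compensated_normal_loss(rendered_normal, prior_normal, comp_axis, comp_angle):
    """Angular loss between the normal rendered from the SDF and a monocular
    normal prior after applying a view-dependent compensation rotation.
    comp_axis/comp_angle stand in for the per-view bias a network would
    learn; both names are illustrative, not from the paper."""
    R = rotation_from_axis_angle(comp_axis, comp_angle)
    corrected = R @ prior_normal                     # compensate the biased prior
    corrected = corrected / np.linalg.norm(corrected)
    cos_sim = np.clip(rendered_normal @ corrected, -1.0, 1.0)
    return 1.0 - cos_sim                             # zero when normals agree
```

With a prior biased by a known rotation, compensating with the inverse rotation drives the loss to (numerically) zero, while the uncompensated loss stays strictly positive — the mechanism by which learned compensation removes inconsistent supervision from the normal loss.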