We address the problem of reconstructing 3D surfaces from depth and surface normal maps acquired by a sensor system based on a single perspective camera. Depth and normal maps can be obtained through techniques such as structured-light scanning and photometric stereo, respectively. We propose a perspective-aware log-depth fusion approach that extends existing orthographic, gradient-based depth-normal fusion methods by explicitly accounting for perspective projection, leading to metrically accurate 3D reconstructions. Additionally, the method handles missing depth measurements by leveraging the available surface normal information to inpaint gaps. Experiments on the DiLiGenT-MV dataset demonstrate the effectiveness of our approach and highlight the importance of perspective-aware depth-normal fusion.
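The core idea behind perspective-aware log-depth fusion can be illustrated with a short sketch. Under a pinhole camera with focal length f and principal point (cx, cy), a surface normal n = (nx, ny, nz) constrains the gradient of the *logarithm* of depth: d(log z)/du = -nx / (nx(u-cx) + ny(v-cy) + nz f), and analogously for v. The code below, a minimal illustration and not the paper's actual implementation, converts a normal map into log-depth gradients, then solves a sparse least-squares system that matches finite differences of log depth to these gradients while softly anchoring log depth to whatever (possibly sparse) depth measurements are available; pixels without measurements are filled in from the normals alone, which is the inpainting behavior described above. The function name and the weight `lam` are illustrative choices.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr


def fuse_normals_depth_perspective(normals, depth, depth_mask, f, cx, cy, lam=1.0):
    """Recover dense depth from a normal map and sparse depth samples
    under perspective projection (illustrative sketch, not the paper's code).

    Normals constrain the gradient of log depth:
        d(log z)/du = -nx / D,   d(log z)/dv = -ny / D,
    with D = nx*(u - cx) + ny*(v - cy) + nz*f.  The relation is invariant
    to the normal's sign convention, since flipping n flips both the
    numerator and D.
    """
    h, w = normals.shape[:2]
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    D = nx * (u - cx) + ny * (v - cy) + nz * f
    gu = -nx / D  # d(log z)/du per pixel
    gv = -ny / D  # d(log z)/dv per pixel

    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    # Horizontal finite differences of log depth match the u-gradient
    # (trapezoidal average of the two pixel gradients).
    for y in range(h):
        for x in range(w - 1):
            rows += [r, r]; cols += [idx[y, x + 1], idx[y, x]]; vals += [1.0, -1.0]
            rhs.append(0.5 * (gu[y, x] + gu[y, x + 1])); r += 1
    # Vertical finite differences match the v-gradient.
    for y in range(h - 1):
        for x in range(w):
            rows += [r, r]; cols += [idx[y + 1, x], idx[y, x]]; vals += [1.0, -1.0]
            rhs.append(0.5 * (gv[y, x] + gv[y + 1, x])); r += 1
    # Soft data term: anchor log depth where measurements exist; pixels
    # without measurements are inpainted by the gradient equations alone.
    for y, x in zip(*np.nonzero(depth_mask)):
        rows.append(r); cols.append(idx[y, x]); vals.append(lam)
        rhs.append(lam * np.log(depth[y, x])); r += 1

    A = coo_matrix((vals, (rows, cols)), shape=(r, h * w)).tocsr()
    log_z = lsqr(A, np.asarray(rhs), atol=1e-12, btol=1e-12, iter_lim=20000)[0]
    return np.exp(log_z).reshape(h, w)
```

As a sanity check, for a slanted plane n·P = d the perspective depth is z = d·f / D(u, v), so log z is recoverable exactly up to discretization error; running the solver on such a plane with only a few depth anchors reconstructs the full map.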