Neural radiance fields (NeRFs) are a deep learning technique that can generate novel views of a 3D scene from sparse 2D images captured at different viewing directions and camera poses. SeaThru-NeRF extends conventional NeRFs to underwater environments, where light is absorbed and scattered by water, and separates the clean appearance and geometric structure of an underwater scene from the effects of the scattering medium. Since the quality of the recovered appearance and structure is crucial for downstream tasks such as underwater infrastructure inspection, the reliability of the 3D reconstruction model should be assessed and evaluated. However, because existing methods cannot quantify the uncertainty of 3D reconstructions of underwater scenes under natural ambient illumination, the practical deployment of NeRFs in unmanned autonomous underwater navigation remains limited. To address this issue, we introduce a spatial perturbation field D_omega, based on Bayes' Rays, into SeaThru-NeRF and perform a Laplace approximation to obtain a Gaussian distribution N(0, Sigma) over the parameters omega, where the diagonal elements of Sigma correspond to the uncertainty at each spatial location. We further employ a simple thresholding method to remove artifacts from the rendered underwater scenes. Numerical experiments demonstrate the effectiveness of this approach.
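The uncertainty pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`laplace_uncertainty`, `threshold_mask`), the diagonal-Hessian form of the Laplace approximation, the prior variance, and the toy numbers are all assumptions made for exposition.

```python
import numpy as np

def laplace_uncertainty(hessian_diag, prior_var=1.0):
    """Diagonal Laplace approximation: with a Gaussian prior of variance
    prior_var on each perturbation parameter omega_i, the posterior
    variance (a diagonal entry of Sigma) is approx 1 / (H_ii + 1/prior_var),
    where H_ii is the diagonal of the loss Hessian at the optimum."""
    return 1.0 / (hessian_diag + 1.0 / prior_var)

def threshold_mask(uncertainty, tau):
    """Simple thresholding: keep spatial locations whose uncertainty
    falls below tau; high-uncertainty locations are treated as artifacts."""
    return uncertainty < tau

# Toy example: four spatial locations with different Hessian curvatures.
h = np.array([10.0, 0.1, 5.0, 0.01])   # sharp curvature -> low uncertainty
sigma_diag = laplace_uncertainty(h)    # diagonal of Sigma
mask = threshold_mask(sigma_diag, tau=0.5)
```

Locations where the loss is sharply curved (large `H_ii`) receive low posterior variance and survive the mask; flat directions are flagged as uncertain and removed from the rendering.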