We introduce OceanSplat, a novel 3D Gaussian Splatting-based approach for high-fidelity underwater scene reconstruction. To overcome multi-view inconsistencies caused by scattering media, we design a trinocular setup for each camera pose by rendering from horizontally and vertically translated virtual viewpoints, enforcing view consistency to facilitate spatial optimization of 3D Gaussians. Furthermore, we derive synthetic epipolar depth priors from the virtual viewpoints, which serve as self-supervised depth regularizers to compensate for the limited geometric cues in degraded underwater scenes. We also propose a depth-aware alpha adjustment that modulates the opacity of 3D Gaussians during early training based on their depth along the viewing direction, deterring the formation of medium-induced primitives. Our approach promotes the disentanglement of 3D Gaussians from the scattering medium through effective geometric constraints, enabling accurate representation of scene structure and significantly reducing floating artifacts. Experiments on real-world underwater and simulated scenes demonstrate that OceanSplat substantially outperforms existing methods for both scene reconstruction and restoration in scattering media.
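The depth-aware alpha adjustment described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the gate shape, the threshold `d0`, and the steepness `k` are assumptions, chosen only to show how opacity could be down-weighted for Gaussians lying close to the camera along the viewing direction, where medium-induced floaters tend to form during early training.

```python
import numpy as np

def depth_aware_alpha(alpha, centers, cam_pos, view_dir, d0=0.5, k=8.0):
    """Illustrative depth-aware alpha adjustment (hypothetical sketch).

    alpha    : (N,) base opacities of the 3D Gaussians
    centers  : (N, 3) Gaussian centers in world coordinates
    cam_pos  : (3,) camera position
    view_dir : (3,) unit viewing direction
    d0, k    : assumed gate threshold and steepness (not from the paper)
    """
    # Depth of each Gaussian center along the viewing direction.
    t = (centers - cam_pos) @ view_dir          # (N,)
    # Smooth logistic gate: ~0 for t << d0, ~1 for t >> d0,
    # suppressing near-camera (likely medium-induced) primitives.
    w = 1.0 / (1.0 + np.exp(-k * (t - d0)))
    return alpha * w
```

Applied only during early iterations, such a gate would let far-field Gaussians keep their opacity while near-camera primitives are penalized until the scene geometry stabilizes.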