We introduce SeaSplat, a method that enables real-time rendering of underwater scenes by leveraging recent advances in 3D radiance fields. Underwater scenes are challenging visual environments: imaging through a medium such as water introduces both range- and color-dependent effects on image capture. We constrain 3D Gaussian Splatting (3DGS), a recent radiance-field advance that enables rapid training and real-time rendering of full 3D scenes, with a physically grounded underwater image formation model. Applying SeaSplat to real-world scenes from the SeaThru-NeRF dataset, to a scene collected by an underwater vehicle in the US Virgin Islands, and to real-world scenes degraded in simulation, we not only see improved quantitative performance when rendering novel viewpoints with the medium present, but are also able to recover the underlying true color of the scene and restore renders as if the intervening medium were absent. We show that the underwater image formation model helps the representation learn scene structure, yielding better depth maps, and that our improvements retain the significant computational advantages of a 3D Gaussian representation.
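To make the range- and color-dependent effects concrete, below is a minimal sketch of a SeaThru-style underwater image formation model of the kind the abstract refers to: the observed color is the true scene color attenuated exponentially with range, plus a range-dependent backscatter term. The function name and the specific coefficient values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def underwater_image(J, z, beta_d, beta_b, B_inf):
    """SeaThru-style image formation (illustrative sketch).

    J      : (H, W, 3) true (restored) scene color
    z      : (H, W)    range from camera to scene, in meters
    beta_d : (3,)      per-channel attenuation coefficient (direct signal)
    beta_b : (3,)      per-channel backscatter coefficient
    B_inf  : (3,)      veiling-light color at infinite range
    """
    z3 = z[..., None]                                # broadcast range over channels
    direct = J * np.exp(-beta_d[None, None, :] * z3) # color attenuated with distance
    backscatter = B_inf * (1.0 - np.exp(-beta_b[None, None, :] * z3))
    return direct + backscatter
```

At zero range the model returns the true color unchanged, and as range grows the image converges to the veiling-light color, which is why both accurate depth and per-channel medium parameters are needed to invert the model and restore the scene.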