Underwater 3D reconstruction and appearance restoration are hindered by the complex optical properties of water, such as wavelength-dependent attenuation and scattering. Existing Neural Radiance Fields (NeRF)-based methods suffer from slow rendering and suboptimal color restoration, while 3D Gaussian Splatting (3DGS) inherently lacks the capability to model complex volumetric scattering effects. To address these issues, we introduce WaterClear-GS, the first pure 3DGS-based framework that explicitly integrates the underwater optical effects of local attenuation and scattering into Gaussian primitives, eliminating the need for an auxiliary medium network. Our method employs a dual-branch optimization strategy that ensures underwater photometric consistency while naturally recovering water-free appearances. This strategy is enhanced by depth-guided geometry regularization and a perception-driven image loss, together with exposure constraints, spatially adaptive regularization, and physically guided spectral regularization, which collectively enforce local 3D coherence and maintain natural visual perception. Experiments on standard benchmarks and our newly collected dataset demonstrate that WaterClear-GS achieves outstanding performance on both novel view synthesis (NVS) and underwater image restoration (UIR) tasks, while maintaining real-time rendering. The code will be available at https://buaaxrzhang.github.io/WaterClear-GS/.