Current novel view synthesis tasks primarily rely on high-quality, clear images. However, in foggy scenes, scattering and attenuation can significantly degrade reconstruction and rendering quality. Although NeRF-based dehazing reconstruction algorithms have been developed, their use of deep fully connected neural networks and per-ray sampling strategies leads to high computational costs. Moreover, NeRF's implicit representation struggles to recover fine details from hazy scenes. In contrast, recent advancements in 3D Gaussian Splatting achieve high-quality 3D scene reconstruction by explicitly modeling point clouds as 3D Gaussians. In this paper, we propose leveraging the explicit Gaussian representation to model the foggy image formation process through physically accurate forward rendering. We introduce DehazeGS, a method capable of decomposing and rendering a fog-free background from participating media using only multi-view foggy images as input. We model the transmission within each Gaussian distribution to simulate the formation of fog. During this process, we jointly learn the atmospheric light and scattering coefficient while optimizing the Gaussian representation of the hazy scene. In the inference stage, we eliminate the effects of scattering and attenuation on the Gaussians and directly project them onto a 2D plane to obtain a clear view. Experiments on both synthetic and real-world foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance in terms of both rendering quality and computational efficiency.
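The per-Gaussian formulation is not detailed in the abstract, but the forward image formation it describes is typically based on the standard atmospheric scattering model, in which a clear radiance J is attenuated by a depth-dependent transmission and blended with a global atmospheric light A. The sketch below illustrates that model only; the function names and the homogeneous-fog assumption (a single scalar scattering coefficient β) are illustrative, not the paper's implementation.

```python
import numpy as np

def transmission(depth, beta):
    """Beer-Lambert transmission t(d) = exp(-beta * d), assuming a
    homogeneous participating medium with scattering coefficient beta."""
    return np.exp(-beta * depth)

def foggy_forward(clear_color, depth, beta, airlight):
    """Standard atmospheric scattering model: I = J * t + A * (1 - t).

    clear_color: fog-free radiance J, shape (N, 3)
    depth:       per-point depth d along the ray, shape (N,)
    airlight:    global atmospheric light A, shape (3,)
    Returns the observed foggy radiance I, shape (N, 3).
    """
    t = transmission(depth, beta)[:, None]          # attenuation factor
    return clear_color * t + airlight * (1.0 - t)   # direct + airlight terms

# Example: a mid-gray surface 20 units deep in fog (beta = 0.05).
# As depth grows, the observed color is pulled toward the airlight A.
J = np.array([[0.5, 0.5, 0.5]])
A = np.array([0.9, 0.9, 0.9])
I = foggy_forward(J, np.array([20.0]), beta=0.05, airlight=A)
```

In this formulation, dehazing at inference time amounts to inverting the blend: once t, β, and A are estimated during optimization, the attenuation and airlight terms can be removed to recover J.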