High-quality image acquisition in real-world environments remains challenging due to complex illumination variations and the inherent limitations of camera imaging pipelines. These issues are exacerbated in multi-view capture, where differences in lighting, sensor responses, and image signal processor (ISP) configurations introduce photometric and chromatic inconsistencies. Such inconsistencies violate the photometric-consistency assumption underlying modern 3D novel view synthesis (NVS) methods, including Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), degrading reconstruction and rendering quality. We propose Luminance-GS++, a 3DGS-based framework for robust NVS under diverse illumination conditions. Our method combines a global, view-adaptive lightness adjustment with local pixel-wise residual refinement for precise color correction. We further design unsupervised objectives that jointly enforce lightness correction and multi-view geometric and photometric consistency. Extensive experiments demonstrate state-of-the-art performance in challenging scenarios, including low light, overexposure, and complex luminance and chromatic variations. Unlike prior approaches that modify the underlying representation, our method preserves the explicit 3DGS formulation, improving reconstruction fidelity while maintaining real-time rendering efficiency.
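The two-stage correction described above (a global, view-adaptive lightness adjustment followed by local pixel-wise residual refinement) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names, the gamma-curve parameterization of the global adjustment, and the additive residual map are not the paper's actual formulation.

```python
import numpy as np

def global_lightness_adjust(img, gamma):
    """Hypothetical global, per-view tone adjustment (a simple gamma curve).

    In Luminance-GS++ the adjustment is view-adaptive; here we model that
    by letting each view carry its own `gamma` exponent.
    """
    return np.clip(img, 0.0, 1.0) ** gamma

def refine_with_residual(img, residual):
    """Hypothetical local refinement: add a per-pixel residual map, then
    clamp back to the valid [0, 1] intensity range."""
    return np.clip(img + residual, 0.0, 1.0)

# Toy usage: brighten an underexposed view globally, then apply a small
# local per-pixel correction.
view = np.full((4, 4, 3), 0.04)                    # dim view, values in [0, 1]
coarse = global_lightness_adjust(view, gamma=0.5)  # view-adaptive exponent
residual = np.zeros_like(coarse)
residual[0, 0] = 0.02                              # local tweak at one pixel
corrected = refine_with_residual(coarse, residual)
```

The split mirrors the abstract's design rationale: the global curve handles the dominant exposure mismatch between views cheaply, while the residual map absorbs spatially varying color errors the global model cannot express.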