Low-light 3D reconstruction from sparse views remains challenging due to exposure imbalance and degraded color fidelity. Existing methods struggle with cross-view inconsistency and require per-scene training. We propose SplatBright, which is, to our knowledge, the first generalizable 3D Gaussian framework for joint low-light enhancement and reconstruction from sparse sRGB inputs. Our key idea is to integrate physically guided illumination modeling with geometry-appearance decoupling for consistent low-light reconstruction. Specifically, we adopt a dual-branch predictor that provides stable geometric initialization of 3D Gaussian parameters. On the appearance side, an illumination consistency module leverages frequency priors to enable controllable and cross-view coherent lighting, while an appearance refinement module further separates illumination, material, and view-dependent cues to recover fine texture. To tackle the lack of large-scale, geometrically consistent paired data, we synthesize dark views via a physics-based camera model for training. Extensive experiments on public and self-collected datasets demonstrate that SplatBright achieves superior novel view synthesis, stronger cross-view consistency, and better generalization to unseen low-light scenes than both 2D and 3D methods.
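The abstract does not specify the camera model used to synthesize dark views. A minimal sketch of how such physics-based darkening is commonly done (linearize sRGB, reduce exposure, add signal-dependent shot noise and signal-independent read noise, re-apply gamma); the function name, parameter values, and noise model here are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def synthesize_dark_view(srgb, exposure_scale=0.1, read_noise_std=0.002,
                         shot_noise_scale=0.005, gamma=2.2, seed=0):
    """Darken a well-lit sRGB image [0, 1] with a simple camera model."""
    rng = np.random.default_rng(seed)
    linear = np.clip(srgb, 0.0, 1.0) ** gamma        # undo display gamma
    dark = linear * exposure_scale                   # shorter exposure / lower gain
    # Shot noise variance scales with signal; read noise is constant.
    shot = rng.normal(0.0, np.sqrt(np.maximum(dark, 0.0) * shot_noise_scale))
    read = rng.normal(0.0, read_noise_std, size=dark.shape)
    noisy = np.clip(dark + shot + read, 0.0, 1.0)
    return noisy ** (1.0 / gamma)                    # back to sRGB
```

Because the same deterministic darkening is applied to every view of a scene (with per-pixel noise on top), the synthesized pairs remain geometrically consistent across viewpoints, which is the property the paper highlights as missing from existing low-light datasets.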