3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in 3D scene reconstruction and novel view synthesis. However, its training heavily depends on high-quality, sharp images and accurate camera poses. Fulfilling these requirements can be challenging in non-ideal real-world scenarios, where motion-blurred images are commonly encountered under fast camera motion or in low-light environments that require long exposure times. To address these challenges, we introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that integrates event streams captured by an event camera to assist in reconstructing high-quality 3D-GS from blurry images. Capitalizing on the high temporal resolution and dynamic range offered by the event camera, we leverage the event streams to explicitly model the formation process of motion-blurred images and to guide the deblurring reconstruction of 3D-GS. By jointly optimizing the 3D-GS parameters and recovering the camera motion trajectories during the exposure time, our method robustly facilitates the acquisition of high-fidelity novel views with intricate texture details. We comprehensively evaluated our method and compared it with previous state-of-the-art deblurring rendering methods. Both qualitative and quantitative comparisons demonstrate that our method surpasses existing techniques in restoring fine details from blurry images and producing high-fidelity novel views.
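To illustrate the kind of event-based blur formation model the abstract refers to, the sketch below follows the standard Event-based Double Integral (EDI) formulation: a blurry image is the temporal average of latent sharp images over the exposure window, and each latent image is related to a reference frame by the exponential of the integrated event signal. This is a minimal NumPy sketch under assumed conventions; the function name `edi_blur_formation`, the `event_frames` tensor layout, and the `contrast` value are illustrative and not taken from the paper, whose exact formulation may differ.

```python
import numpy as np

def edi_blur_formation(latent_ref, event_frames, contrast=0.2):
    """EDI-style blur formation sketch (assumptions, not the paper's code).

    latent_ref:   (H, W) sharp latent intensity image at the exposure start.
    event_frames: (K, H, W) signed event counts (+1/-1 polarities summed)
                  for K sub-intervals inside the exposure window.
    contrast:     event contrast threshold c (assumed constant per pixel).

    The latent image at time t_k is latent_ref * exp(c * E(t_k)), where
    E(t_k) is the event integral from the exposure start up to t_k; the
    blurry image is the average of all latent images over the exposure.
    """
    # cumulative event integral E(t_k) along the time axis
    cum_events = np.cumsum(event_frames, axis=0)
    latents = [latent_ref]  # latent image at the exposure start
    for k in range(event_frames.shape[0]):
        latents.append(latent_ref * np.exp(contrast * cum_events[k]))
    # blurry observation = temporal mean of the latent images
    return np.mean(latents, axis=0)
```

With no events inside the exposure window, the scene is static and the synthesized "blurry" image reduces to the sharp reference, which is a quick sanity check for the model.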