3D Gaussian Splatting (3DGS) demonstrates superior performance in 3D scene reconstruction. However, 3DGS relies heavily on sharp input images, a requirement that is hard to satisfy in real-world scenarios, especially when the camera moves fast, which severely limits its applicability. To address these challenges, we propose Spike Gaussian Splatting (SpikeGS), the first framework that integrates spike streams into the 3DGS pipeline to reconstruct 3D scenes captured by a fast-moving bio-inspired camera. With accumulation rasterization, interval supervision, and a specially designed pipeline, SpikeGS extracts detailed geometry and texture from the high-temporal-resolution but texture-lacking spike stream, reconstructing 3D scenes captured within 1 second. Extensive experiments on multiple synthetic and real-world datasets demonstrate the superiority of SpikeGS over existing spike-based and deblurring-based 3D scene reconstruction methods. Code and data will be released soon.
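The abstract does not detail how spikes are accumulated, but the general idea behind recovering intensity from a binary spike stream can be sketched as follows. This is a minimal illustration of window-based spike counting (the function name and toy data are hypothetical, not the authors' implementation): a spike camera emits binary frames at very high temporal resolution, and the firing rate within a short window approximates scene brightness.

```python
import numpy as np

def accumulate_spikes(spike_stream: np.ndarray, start: int, length: int) -> np.ndarray:
    """Estimate an intensity image by averaging binary spike frames in a temporal window.

    spike_stream: array of shape (T, H, W) with values in {0, 1}.
    Returns an (H, W) spike-rate image in [0, 1], which serves as a proxy for intensity.
    """
    window = spike_stream[start:start + length]  # (length, H, W) binary frames
    return window.mean(axis=0)                   # spike rate ≈ normalized brightness

# Toy example: a bright pixel fires often, a dark pixel rarely (simulated rates).
rng = np.random.default_rng(0)
T, H, W = 200, 2, 2
true_intensity = np.array([[0.9, 0.1], [0.5, 0.0]])  # hypothetical firing rates
spikes = (rng.random((T, H, W)) < true_intensity).astype(np.uint8)

est = accumulate_spikes(spikes, start=0, length=T)   # rate estimate per pixel
```

Longer windows give cleaner intensity estimates but blur motion, which is why methods like SpikeGS supervise the reconstruction over intervals of the stream rather than a single long exposure.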