In this paper, we present a real-time photo-realistic SLAM method that marries Gaussian Splatting with LiDAR-Inertial-Camera SLAM. Most existing radiance-field-based SLAM systems focus on bounded indoor environments and rely on RGB-D or RGB sensors. However, they are prone to degrade when extended to unbounded scenes or when encountering adverse conditions such as aggressive motion and changing illumination. In contrast, our approach targets general scenarios and tightly fuses LiDAR, IMU, and camera measurements for robust pose estimation and photo-realistic online mapping. To compensate for regions unobserved by the LiDAR, we propose to combine visual points triangulated from images with LiDAR points when initializing the 3D Gaussians. In addition, we model the sky and varying camera exposure to achieve high-quality rendering. Notably, we implement our system purely in C++ and CUDA, and meticulously design a series of strategies to accelerate the online optimization of the Gaussian-based scene representation. Extensive experiments demonstrate that our method outperforms its counterparts while maintaining real-time capability. Impressively, for photo-realistic mapping, our method with our estimated poses even surpasses all compared approaches that use privileged ground-truth poses for mapping. Our code will be released on the project page https://xingxingzuo.github.io/gaussian_lic.
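To make the initialization idea concrete, the following is a minimal, hypothetical C++ sketch of seeding isotropic 3D Gaussians from the union of LiDAR points and triangulated visual points; the struct and function names are illustrative assumptions, not the paper's actual API, and the nearest-neighbor scale heuristic (a common choice in Gaussian Splatting pipelines) lets sparse regions covered only by visual points receive larger initial splats.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical types -- not the paper's actual data structures.
struct Point3 { float x, y, z; };
struct Gaussian { Point3 mean; float scale; float opacity; };

// Seed one isotropic Gaussian per input point. The union of LiDAR and
// triangulated visual points fills regions the LiDAR cannot observe.
// Each Gaussian's scale is set from the distance to its nearest neighbor,
// so sparse (visual-only) regions get larger splats.
std::vector<Gaussian> initGaussians(const std::vector<Point3>& lidarPts,
                                    const std::vector<Point3>& visualPts) {
    std::vector<Point3> all = lidarPts;
    all.insert(all.end(), visualPts.begin(), visualPts.end());

    std::vector<Gaussian> out;
    out.reserve(all.size());
    for (std::size_t i = 0; i < all.size(); ++i) {
        float bestSq = 1e30f;  // squared distance to nearest neighbor
        for (std::size_t j = 0; j < all.size(); ++j) {
            if (i == j) continue;
            const float dx = all[i].x - all[j].x;
            const float dy = all[i].y - all[j].y;
            const float dz = all[i].z - all[j].z;
            bestSq = std::min(bestSq, dx * dx + dy * dy + dz * dz);
        }
        const float scale = all.size() > 1 ? std::sqrt(bestSq) : 1.0f;
        out.push_back({all[i], scale, 0.1f});  // low initial opacity
    }
    return out;
}
```

A real system would use a spatial index (e.g. a k-d tree) instead of the O(n²) neighbor search above, and would also initialize color from the projecting image pixel; the sketch only shows where the two point sources merge.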