LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving. Although recent advances, such as reconstructed meshes and Neural Radiance Fields (NeRF), have made progress in simulating the physical properties of LiDAR, these methods struggle to achieve satisfactory frame rates and rendering quality. To address these limitations, we present LiDAR-GS, the first LiDAR Gaussian Splatting method for real-time, high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes. Vanilla Gaussian Splatting, designed for camera models, cannot be applied directly to LiDAR re-simulation. To bridge the gap between the passive camera and the active LiDAR, LiDAR-GS introduces a differentiable laser beam splatting grounded in the LiDAR range-view model. This formulation enables precise surface splatting by projecting lasers onto micro cross-sections, effectively eliminating the artifacts associated with local affine approximations. Additionally, LiDAR-GS leverages Neural Gaussian Fields, which further integrate view-dependent cues, to represent key LiDAR properties influenced by the incident angle and external factors. Combining these practices with essential adaptations, e.g., dynamic instance decomposition, our approach simultaneously re-simulates the depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large-scale scene datasets. Our source code will be made publicly available.
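To make the LiDAR range-view model mentioned above concrete, the sketch below projects a 3D point cloud into a 2D range image by mapping each point's azimuth and elevation to pixel coordinates. This is a minimal illustration of the standard range-view formulation, not the paper's method; the resolution and vertical field of view are hypothetical values typical of a 64-beam sensor.

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024,
                          fov_up_deg=15.0, fov_down_deg=-25.0):
    """Project 3D LiDAR points (N, 3) into an (h, w) range image.

    Hypothetical sensor parameters: h beams, w azimuth bins, and a
    vertical FOV from fov_down_deg to fov_up_deg.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range per point
    azimuth = np.arctan2(y, x)                      # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-8))  # in [-pi/2, pi/2]

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)

    # Normalize angles into pixel coordinates.
    u = (0.5 * (1.0 - azimuth / np.pi) * w).astype(int) % w
    v = (fov_up - elevation) / (fov_up - fov_down) * h
    v = np.clip(v.astype(int), 0, h - 1)

    # Keep the nearest return when several points fall in one pixel.
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)
    img[np.isinf(img)] = 0.0                        # empty pixels -> 0
    return img

# Example: a few synthetic points around the sensor.
pts = np.array([[10.0, 0.0, 0.0],
                [5.0, 5.0, 1.0],
                [0.0, -8.0, -2.0]])
ri = points_to_range_image(pts)
```

A point straight ahead at zero elevation, such as `[10, 0, 0]`, lands in the central azimuth column with its range value stored in the pixel; re-simulation methods render exactly this kind of per-pixel depth (plus intensity and ray-drop) image.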