This paper targets the challenge of real-time LiDAR re-simulation in dynamic driving scenarios. Recent approaches combine neural radiance fields with physical modeling of LiDAR sensors to achieve high-fidelity re-simulation results. Unfortunately, these methods suffer from high computational demands in large-scale scenes and cannot perform real-time LiDAR rendering. To overcome these constraints, we propose LiDAR-RT, a novel framework that supports real-time, physically accurate LiDAR re-simulation for driving scenes. Our primary contribution is an efficient and effective rendering pipeline that integrates Gaussian primitives with hardware-accelerated ray tracing. Specifically, we model the physical properties of LiDAR sensors using Gaussian primitives with learnable parameters and incorporate scene graphs to handle scene dynamics. Building upon this scene representation, our framework first constructs a bounding volume hierarchy (BVH), then casts rays for each pixel and generates novel LiDAR views through a differentiable rendering algorithm. Importantly, our framework supports realistic rendering with flexible scene editing operations and various sensor configurations. Extensive experiments across multiple public benchmarks demonstrate that our method outperforms state-of-the-art methods in terms of rendering quality and efficiency. Our project page is at https://zju3dv.github.io/lidar-rt.
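To make the rendering pipeline concrete, the following is a minimal, greatly simplified sketch of ray-based rendering with Gaussian primitives: each ray queries nearby Gaussians (standing in for BVH-accelerated intersection), hits are sorted by depth, and depth/intensity are alpha-composited front to back. All names (`gaussian_response`, `render_ray`), the isotropic-Gaussian assumption, and the per-primitive `intensity` attribute are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gaussian_response(ray_o, ray_d, mu, sigma, opacity):
    """Peak response of an isotropic 3D Gaussian along a ray (assumption:
    evaluated at the point on the ray closest to the Gaussian center)."""
    t = float(np.dot(mu - ray_o, ray_d))   # depth of the closest approach
    if t <= 0.0:
        return 0.0, 0.0                    # Gaussian lies behind the sensor
    p = ray_o + t * ray_d
    d2 = float(np.sum((p - mu) ** 2))      # squared distance to the center
    alpha = opacity * np.exp(-0.5 * d2 / sigma ** 2)
    return t, alpha

def render_ray(ray_o, ray_d, gaussians):
    """Front-to-back alpha compositing of depth and intensity over
    depth-sorted Gaussian hits, mimicking volumetric rendering."""
    hits = []
    for mu, sigma, opacity, intensity in gaussians:
        t, alpha = gaussian_response(ray_o, ray_d, mu, sigma, opacity)
        if alpha > 1e-4:                   # skip negligible contributions
            hits.append((t, alpha, intensity))
    hits.sort(key=lambda h: h[0])          # sort hits by depth
    T, depth, inten = 1.0, 0.0, 0.0        # T = accumulated transmittance
    for t, alpha, i in hits:
        w = T * alpha                      # compositing weight of this hit
        depth += w * t                     # opacity-weighted depth
        inten += w * i                     # opacity-weighted intensity
        T *= 1.0 - alpha
    return depth, inten, 1.0 - T           # (depth, intensity, hit probability)

# Hypothetical usage: one Gaussian 5 m ahead of a ray along +z.
gaussians = [(np.array([0.0, 0.0, 5.0]), 0.5, 0.9, 0.8)]
d, i, p = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), gaussians)
```

In the full method this per-ray loop is replaced by hardware-accelerated BVH traversal, and every step is differentiable so that Gaussian parameters can be optimized from real LiDAR scans.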