Ensuring the safety of autonomous robots, such as self-driving vehicles, requires extensive testing across diverse driving scenarios. Simulation is a key ingredient for conducting such testing in a cost-effective and scalable way. Neural rendering methods have gained popularity, as they can build simulation environments from collected logs in a data-driven manner. However, existing neural radiance field (NeRF) methods for sensor-realistic rendering of camera and lidar data suffer from low rendering speeds, limiting their applicability for large-scale testing. While 3D Gaussian Splatting (3DGS) enables real-time rendering, current methods are limited to camera data and are unable to render lidar data, which is essential for autonomous driving. To address these limitations, we propose SplatAD, the first 3DGS-based method for realistic, real-time rendering of dynamic scenes for both camera and lidar data. SplatAD accurately models key sensor-specific phenomena such as rolling shutter effects, lidar intensity, and lidar ray dropouts, using purpose-built algorithms to optimize rendering efficiency. Evaluation across three autonomous driving datasets demonstrates that SplatAD achieves state-of-the-art rendering quality, with up to +2 PSNR for novel view synthesis (NVS) and +3 PSNR for reconstruction, while increasing rendering speed over NeRF-based methods by an order of magnitude. See https://research.zenseact.com/publications/splatad/ for our project page.