Photorealistic 3D scene reconstruction plays an important role in autonomous driving, enabling the generation of novel data from existing datasets to simulate safety-critical scenarios and expand training data without additional acquisition costs. Gaussian Splatting (GS) facilitates real-time, photorealistic rendering with an explicit 3D Gaussian representation of the scene, providing faster processing and more intuitive scene editing than implicit Neural Radiance Fields (NeRFs). While extensive GS research has yielded promising advancements in autonomous driving applications, existing methods overlook two critical aspects: First, they focus mainly on low-speed, feature-rich urban scenes and ignore the fact that highway scenarios play a significant role in autonomous driving. Second, while LiDARs are commonplace on autonomous driving platforms, existing methods learn primarily from images and use LiDAR only for initial estimates or without precise sensor modeling, thus missing out on the rich depth information LiDAR offers and limiting the ability to synthesize LiDAR data. In this paper, we propose a novel GS method for dynamic scene synthesis and editing with improved scene reconstruction through LiDAR supervision and support for LiDAR rendering. Unlike prior works that are tested mostly on urban datasets, to the best of our knowledge, we are the first to focus on the more challenging and highly relevant highway scenes for autonomous driving, characterized by sparse sensor views and monotone backgrounds.