Photorealistic 3D reconstruction of street scenes is a critical technique for developing real-world simulators for autonomous driving. Despite the efficacy of Neural Radiance Fields (NeRF) for driving scenes, 3D Gaussian Splatting (3DGS) has emerged as a promising direction due to its faster speed and more explicit representation. However, most existing street 3DGS methods require tracked 3D vehicle bounding boxes to decompose static and dynamic elements for effective reconstruction, limiting their applicability to in-the-wild scenarios. To enable efficient 3D scene reconstruction without costly annotations, we propose a self-supervised street Gaussian ($\textit{S}^3$Gaussian) method that decomposes dynamic and static elements from 4D consistency. We represent each scene with 3D Gaussians to preserve explicitness and augment them with a spatial-temporal field network to compactly model the 4D dynamics. We conduct extensive experiments on the challenging Waymo-Open dataset to evaluate the effectiveness of our method. Our $\textit{S}^3$Gaussian demonstrates the ability to decompose static and dynamic scenes and achieves the best performance without using 3D annotations. Code is available at: https://github.com/nnanhuang/S3Gaussian/.
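To make the "3D Gaussians + spatial-temporal field" idea concrete, the following is a minimal toy sketch (not the paper's actual architecture or trained weights): each Gaussian keeps a canonical 3D mean, and a small time-conditioned network, standing in for the learned spatial-temporal field, predicts a per-Gaussian displacement at a query timestamp. All sizes, layer widths, and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scene: N Gaussians represented only by their 3D means
# (covariance, opacity, and color are omitted for brevity).
N = 4
means = rng.normal(size=(N, 3))

# Toy spatial-temporal field: a tiny untrained MLP mapping a Gaussian's
# mean plus a timestamp to a 3D displacement. In the actual method this
# network would be learned from 4D consistency; here weights are random.
W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 3)) * 0.1

def deform(means, t):
    """Predict time-dependent offsets for each Gaussian mean."""
    x = np.concatenate([means, np.full((len(means), 1), t)], axis=1)
    h = np.tanh(x @ W1)  # hidden layer
    return h @ W2        # per-Gaussian 3D offset

# Canonical (static) means vs. deformed (dynamic) means at time t.
t = 0.5
deformed = means + deform(means, t)
print(deformed.shape)  # (4, 3)
```

Gaussians whose predicted offsets stay near zero across all timestamps would behave as static scene elements, while large time-varying offsets correspond to dynamic ones, which is the intuition behind the self-supervised static/dynamic decomposition described above.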