Real-time 4D reconstruction of dynamic scenes remains a crucial challenge for autonomous driving perception. Most existing methods rely on depth estimation through self-supervision or multi-modal sensor fusion. In this paper, we propose Driv3R, a DUSt3R-based framework that directly regresses per-frame point maps from multi-view image sequences. To achieve streaming dense reconstruction, we maintain a memory pool that reasons over both spatial relationships across sensors and dynamic temporal contexts, enhancing multi-view 3D consistency and temporal integration. Furthermore, we employ a 4D flow predictor to identify moving objects within the scene, directing the network to focus more on reconstructing these dynamic regions. Finally, we align all per-frame point maps consistently to the world coordinate system in an optimization-free manner. We conduct extensive experiments on the large-scale nuScenes dataset to evaluate the effectiveness of our method. Driv3R outperforms previous frameworks in 4D dynamic scene reconstruction while achieving 15x faster inference than methods requiring global alignment. Code: https://github.com/Barrybarry-Smith/Driv3R.