In this paper, we introduce \textbf{SLAM3R}, a novel and effective monocular RGB SLAM system for real-time, high-quality dense 3D reconstruction. SLAM3R provides an end-to-end solution by seamlessly integrating local 3D reconstruction and global coordinate registration through feed-forward neural networks. Given an input video, the system first converts it into overlapping clips using a sliding-window mechanism. Unlike traditional pose-optimization-based methods, SLAM3R directly regresses 3D pointmaps from the RGB images in each window, then progressively aligns and deforms these local pointmaps to create a globally consistent scene reconstruction, all without explicitly solving any camera parameters. Experiments across datasets consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy and completeness while maintaining real-time performance at 20+ FPS. Code and weights are available at \url{https://github.com/PKU-VCL-3DV/SLAM3R}.
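The sliding-window partitioning described above can be sketched as follows; this is a minimal illustration, and the window length and stride used here are hypothetical placeholders, not the values used by SLAM3R:

```python
def sliding_window_clips(frames, window=11, stride=2):
    """Split a frame sequence into overlapping clips.

    The window length and stride are illustrative defaults only;
    consecutive clips overlap by (window - stride) frames, which is
    what lets local reconstructions be aligned with one another.
    """
    clips = []
    last_start = max(len(frames) - window, 0)
    for start in range(0, last_start + 1, stride):
        clips.append(frames[start:start + window])
    return clips
```

Each clip would then be fed to the local reconstruction network, with the shared frames between neighboring clips providing the anchor for progressive global alignment.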