Streaming reconstruction from uncalibrated monocular video remains challenging, as it requires both high-precision pose estimation and computationally efficient online refinement in dynamic environments. While coupling 3D foundation models with SLAM frameworks is a promising paradigm, a critical bottleneck persists: most multi-view foundation models estimate poses in a feed-forward manner, yielding pixel-level correspondences that lack the precision required for rigorous geometric optimization. To address this, we present M^3, which augments a Multi-view foundation model with a dedicated Matching head that produces fine-grained dense correspondences and integrates it into a robust Monocular Gaussian Splatting SLAM system. M^3 further improves tracking stability through dynamic area suppression and cross-inference intrinsic alignment. Extensive experiments on diverse indoor and outdoor benchmarks demonstrate state-of-the-art accuracy in both pose estimation and scene reconstruction. Notably, M^3 reduces ATE RMSE by 64.3% compared to VGGT-SLAM 2.0 and outperforms ARTDECO by 2.11 dB in PSNR on the ScanNet++ dataset.