Mobile robots and IoT devices demand real-time localization and dense reconstruction under tight compute and energy budgets. While 3D Gaussian Splatting (3DGS) enables efficient dense SLAM, dynamic objects and occlusions still degrade tracking and mapping. Existing dynamic 3DGS-SLAM systems often rely on heavy optical-flow estimation and per-frame segmentation, which are costly for mobile deployment and brittle under challenging illumination. We present DAGS-SLAM, a dynamic-aware 3DGS-SLAM system that maintains a spatiotemporal motion probability (MP) state per Gaussian and triggers semantic inference on demand via an uncertainty-aware scheduler. DAGS-SLAM fuses lightweight YOLO instance priors with geometric cues to estimate and temporally update MP, propagates MP to the front end for dynamic-aware correspondence selection, and suppresses dynamic artifacts in the back end via MP-guided optimization. Experiments on public dynamic RGB-D benchmarks show improved reconstruction quality and robust tracking while sustaining real-time throughput on a commodity GPU, demonstrating a practical speed-accuracy tradeoff with reduced semantic invocations toward mobile deployment.
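The abstract's two core mechanisms, temporal MP updating from fused semantic/geometric cues and uncertainty-gated semantic invocation, can be illustrated with a minimal sketch. This is not the paper's actual formulation (the abstract does not specify the update rule or the scheduler's uncertainty measure); it assumes, hypothetically, an exponential moving-average update and a mean-binary-entropy trigger, with all function names, cue definitions, and thresholds invented for illustration.

```python
import numpy as np

def update_motion_probability(mp, sem_prior, geo_cue, alpha=0.7):
    """Temporally update per-Gaussian motion probability (MP) in [0, 1].

    mp        : current MP, one value per Gaussian
    sem_prior : semantic dynamic prior (e.g. from YOLO instance masks)
    geo_cue   : geometric motion evidence (e.g. a normalized residual)
    alpha     : temporal smoothing factor (hypothetical choice; the paper's
                exact update rule is not given in the abstract)
    """
    # Fuse semantic and geometric cues into a per-frame observation,
    # then blend it into the running state via an exponential moving average.
    observation = 0.5 * (sem_prior + geo_cue)
    return alpha * mp + (1.0 - alpha) * observation

def needs_semantics(mp, tau=0.2):
    """Uncertainty-aware scheduler sketch: trigger semantic inference only
    when the mean binary entropy of MP is high, i.e. when many Gaussians
    are ambiguous between static and dynamic."""
    p = np.clip(mp, 1e-6, 1.0 - 1e-6)
    entropy = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return bool(entropy.mean() > tau)
```

Under this sketch, a map whose Gaussians all sit near MP = 0.5 has maximal entropy and would trigger a semantic pass, while a confidently classified map (MP near 0 or 1) skips it, which is one plausible way to realize the "reduced semantic invocations" claimed in the abstract.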