We present an efficient incremental SLAM back-end that achieves the accuracy of full batch optimization while substantially reducing computational cost. The proposed approach combines two complementary ideas: information-guided gating (IGG) and selective partial optimization (SPO). IGG employs an information-theoretic criterion based on the log-determinant of the information matrix to quantify the contribution of new measurements, triggering global optimization only when a significant information gain is observed. This avoids unnecessary relinearization and factorization when incoming data provide little additional information. SPO executes multi-iteration Gauss-Newton (GN) updates but restricts each iteration to the subset of variables most affected by the new measurements, dynamically refining this active set until convergence. Together, these mechanisms retain all measurements to preserve global consistency while focusing computation on the parts of the graph where it yields the greatest benefit. We provide a theoretical analysis showing that the proposed approach maintains the convergence guarantees of full GN. Extensive experiments on benchmark SLAM datasets show that our approach consistently matches the estimation accuracy of batch solvers while achieving significant computational savings over conventional incremental approaches. The results indicate that the proposed approach offers a principled balance between accuracy and efficiency, making it a robust and scalable solution for real-time operation in dynamic, data-rich environments.
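The gating criterion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`information_gain`, `gate_update`) and the threshold `tau` are assumptions, and the information matrix is kept dense for clarity, whereas a real SLAM back-end would work with sparse factorizations.

```python
import numpy as np

def information_gain(Lam, J):
    """Log-determinant gain from adding measurements with Jacobian J.

    Assumes the new measurements update the information matrix as
    Lam -> Lam + J^T J (unit measurement covariance, for illustration).
    """
    _, logdet_old = np.linalg.slogdet(Lam)
    _, logdet_new = np.linalg.slogdet(Lam + J.T @ J)
    return logdet_new - logdet_old

def gate_update(Lam, J, tau=1e-2):
    """Trigger a global optimization only when the gain exceeds tau."""
    return information_gain(Lam, J) > tau

# Illustrative example: a unit-information prior over 4 variables,
# one near-redundant measurement and one strongly informative one.
Lam = np.eye(4)
J_weak = np.full((2, 4), 1e-4)   # barely changes the information matrix
J_strong = 2.0 * np.eye(4)       # substantially sharpens all variables
print(gate_update(Lam, J_weak))    # False: gate stays closed, no relinearization
print(gate_update(Lam, J_strong))  # True: significant gain, run global update
```

The log-determinant is a natural scalar summary here because it measures the volume of the uncertainty ellipsoid: a near-zero gain means the new measurements barely shrink the posterior, so refactorization can be safely deferred.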