Over the last two decades, Structure from Motion (SfM) has been a constant research focus in photogrammetry, computer vision, robotics, and related fields, whereas real-time performance has only recently become a topic of growing interest. This work builds upon the original on-the-fly SfM (Zhan et al., 2024) and presents an updated version with three new advancements for obtaining better 3D reconstructions from captured imagery: (i) real-time image matching is further accelerated by employing Hierarchical Navigable Small World (HNSW) graphs, so that more true-positive overlapping image candidates are identified faster; (ii) a self-adaptive weighting strategy is proposed for robust hierarchical local bundle adjustment, improving the SfM results; (iii) multiple agents are supported for collaborative SfM, seamlessly merging multiple 3D reconstructions into one complete scene whenever commonly registered images appear. Comprehensive experiments demonstrate that the proposed method (named on-the-fly SfMv2) generates more complete and robust 3D reconstructions with high time efficiency. Code is available at http://yifeiyu225.github.io/on-the-flySfMv2.github.io/.
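The HNSW-based candidate retrieval in (i) can be illustrated with a minimal single-layer navigable-small-world sketch: each image descriptor becomes a graph node linked to its nearest neighbours, and a query is answered by a greedy walk through the graph rather than an exhaustive scan. This is a simplified, assumed illustration (full HNSW stacks a hierarchy of such layers and uses heuristic edge selection); the random descriptors, graph degree `M`, and `greedy_search` helper below are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim, M = 200, 32, 8
# Stand-in global image descriptors (in practice, e.g. learned retrieval vectors).
descs = rng.normal(size=(N, dim)).astype(np.float32)

# Build a single-layer NSW-style graph: link each node to its M nearest neighbours.
# (Brute-force construction here for clarity; HNSW builds this incrementally.)
d2 = ((descs[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
neighbours = np.argsort(d2, axis=1)[:, :M]

def greedy_search(query, entry=0):
    """Greedy graph walk: hop to the neighbour closest to the query until no
    neighbour improves on the current node, then return that node's id."""
    cur = entry
    cur_d = ((descs[cur] - query) ** 2).sum()
    while True:
        cand = neighbours[cur]
        cand_d = ((descs[cand] - query) ** 2).sum(-1)
        if cand_d.min() >= cur_d:
            return cur
        cur = cand[int(np.argmin(cand_d))]
        cur_d = cand_d.min()

# Retrieve an overlapping-image candidate for a newly arrived descriptor.
hit = greedy_search(descs[123])
```

Because each hop strictly decreases the distance to the query, the walk terminates after at most N hops and visits only a small fraction of the database, which is what makes graph-based retrieval attractive for real-time matching.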