Segment matching is an important intermediate task in computer vision that establishes correspondences between semantically or geometrically coherent regions across images. Unlike keypoint matching, which focuses on localized features, segment matching captures structured regions, offering greater robustness to occlusions, lighting variations, and viewpoint changes. In this paper, we leverage the spatial understanding of 3D foundation models to tackle wide-baseline segment matching, a challenging setting involving extreme viewpoint shifts. We propose an architecture that uses the inductive bias of these 3D foundation models to match segments across image pairs with viewpoint rotations of up to 180 degrees. Extensive experiments show that our approach outperforms state-of-the-art methods, including the SAM2 video propagator and local feature matching methods, by up to 30% in AUPRC on the ScanNet++ and Replica datasets. We further demonstrate the benefits of the proposed model on relevant downstream tasks, including 3D instance mapping and object-relative navigation. Project Page: https://segmast3r.github.io/
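To make the evaluation metric concrete: a minimal, illustrative sketch of how AUPRC can be computed for segment matching is given below. This is not the paper's evaluation code; the similarity matrix, segment counts, and toy ground truth are assumptions introduced purely for illustration. The setup assumes each candidate pair of segments (one per image) receives a predicted similarity score and a binary ground-truth label indicating whether the two segments depict the same region.

```python
# Illustrative sketch only (not the authors' evaluation code): AUPRC over
# candidate segment pairs between two images.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Hypothetical similarity matrix between N segments in image A and M segments
# in image B, e.g. cosine similarities of per-segment descriptors.
scores = rng.random((12, 15))           # shape (N, M); higher = more likely match
gt = np.zeros((12, 15), dtype=int)      # 1 where segments truly correspond
gt[np.arange(10), np.arange(10)] = 1    # toy ground truth: first 10 pairs align

# Average precision (area under the precision-recall curve) over all N*M
# candidate pairs; suitable here because true matches are rare, so the
# metric is robust to the heavy class imbalance.
auprc = average_precision_score(gt.ravel(), scores.ravel())
print(f"AUPRC: {auprc:.3f}")
```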