Recognizing places from an opposing viewpoint during a return trip is a common experience for human drivers. However, the analogous robotics capability, visual place recognition (VPR) with limited field-of-view cameras under 180-degree rotations, has proven challenging to achieve. To address this problem, this paper presents Same Place Opposing Trajectory (SPOT), a technique for opposing-viewpoint VPR that relies exclusively on structure estimated through stereo visual odometry (VO). The method extends recent advances in lidar descriptors and utilizes a novel double (similar and opposing) distance matrix sequence matching method. We evaluate SPOT on a publicly available dataset with 6.7-7.6 km routes driven in similar and opposing directions under various lighting conditions. The proposed algorithm demonstrates remarkable improvement over the state-of-the-art, achieving up to 91.7% recall at 100% precision in opposing-viewpoint cases, while requiring less storage than all baselines tested and running faster than all but one. Moreover, the proposed method assumes no a priori knowledge of whether the viewpoint is similar or opposing, and also demonstrates competitive performance in similar-viewpoint cases.
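The double distance matrix sequence matching idea can be illustrated with a toy sketch. This is not the paper's implementation: the descriptor transformation for the opposing view, the window length `L`, and the fixed-slope diagonal scoring are all simplifying assumptions. It shows only the core mechanism of scoring a query sequence against a similar-direction distance matrix (forward diagonals) and an opposing-direction distance matrix (reverse diagonals), then keeping the lower-cost hypothesis, so neither viewpoint case needs to be known a priori.

```python
import numpy as np

def distance_matrix(queries, refs):
    # Pairwise Euclidean distances: rows = query frames, cols = reference frames.
    return np.linalg.norm(queries[:, None, :] - refs[None, :, :], axis=2)

def best_sequence_match(D_sim, D_opp, L=5):
    """Score length-L diagonals: +1 slope in the similar-direction matrix,
    -1 slope in the opposing-direction matrix (the reference sequence is
    traversed backwards). Returns (cost, matched ref index, viewpoint mode)."""
    nq, nr = D_sim.shape
    q = np.arange(nq - L, nq)                       # last L query frames
    best = (np.inf, -1, "none")
    for j in range(nr - L + 1):
        # Same-direction hypothesis: forward diagonal through columns j..j+L-1.
        fwd = D_sim[q, np.arange(j, j + L)].mean()
        # Opposing-direction hypothesis: reverse diagonal, columns j+L-1..j.
        rev = D_opp[q, np.arange(j + L - 1, j - 1, -1)].mean()
        if fwd < best[0]:
            best = (fwd, j + L - 1, "similar")
        if rev < best[0]:
            best = (rev, j, "opposing")
    return best

# Synthetic check: the query revisits reference frames 14..10 in reverse order,
# mimicking a return trip along the same route.
rng = np.random.default_rng(0)
refs = rng.normal(size=(20, 8))
queries = refs[[14, 13, 12, 11, 10]]
D = distance_matrix(queries, refs)
# Here the same matrix stands in for both hypotheses; in practice D_opp would be
# built from descriptors transformed to the opposing viewpoint.
cost, idx, mode = best_sequence_match(D, D, L=5)
print(mode, idx)  # the opposing-direction hypothesis wins at reference index 10
```

In a real system the two matrices come from different descriptor comparisons (raw descriptors for the similar case, rotation-compensated ones for the opposing case), but the winner-takes-lower-cost selection between the two hypotheses is the part this sketch demonstrates.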