Recent progress in self-supervised representation learning has resulted in models capable of extracting image features that effectively encode not only image-level but also pixel-level semantics. These features have been shown to be effective for dense visual semantic correspondence estimation, even outperforming fully-supervised methods. Nevertheless, current self-supervised approaches still fail in the presence of challenging image characteristics such as symmetries and repeated parts. To address these limitations, we propose a new approach for semantic correspondence estimation that supplements discriminative self-supervised features with 3D understanding via a weak geometric spherical prior. Compared to more involved 3D pipelines, our model only requires weak viewpoint information, and the simplicity of our spherical representation enables us to inject informative geometric priors into the model during training. We also propose a new evaluation metric that better accounts for repeated-part and symmetry-induced mistakes. We present results on the challenging SPair-71k dataset, where we show that our approach is capable of distinguishing between symmetric views and repeated parts across many object categories, and we also demonstrate that it generalizes to unseen classes on the AwA dataset.