Contrastive representation learning has emerged as a prominent approach for anomaly detection. In this work, we explore the $\ell_2$-norm of contrastive features and its application to out-of-distribution (OOD) detection. We propose a simple method based on contrastive learning that incorporates out-of-distribution data by discriminating it against normal samples in the contrastive layer space. Our approach can be applied flexibly either as an outlier exposure (OE) approach, where the out-of-distribution data is a large collection of random images, or as a fully self-supervised approach, where the out-of-distribution data is self-generated by applying distribution-shifting transformations. The ability to incorporate additional out-of-distribution samples yields a feasible solution for datasets on which AD methods based on contrastive learning generally underperform, such as aerial or microscopy images. Furthermore, the high-quality features learned through contrastive learning consistently enhance performance in OE scenarios, even when the available out-of-distribution dataset lacks diversity. Our extensive experiments demonstrate the superiority of the proposed method across various scenarios, including unimodal and multimodal settings, on a range of image datasets.
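To make the core idea concrete, the snippet below is a minimal, hypothetical sketch of scoring samples by the $\ell_2$-norm of their pre-normalization contrastive features; the encoder, the sign convention (whether larger norms indicate normal or anomalous samples), and the function names are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_norm_scores(features: np.ndarray) -> np.ndarray:
    """Score each sample by the L2 norm of its contrastive feature vector.

    features: array of shape (N, D) holding the pre-normalization outputs
    of a contrastive encoder (i.e., before projection onto the unit sphere).
    Whether high or low norms mark anomalies is a per-method convention.
    """
    return np.linalg.norm(features, axis=1)

# Toy example: three feature vectors of differing magnitude.
feats = np.array([[3.0, 4.0],
                  [6.0, 8.0],
                  [0.3, 0.4]])
scores = feature_norm_scores(feats)
# scores -> [5.0, 10.0, 0.5]
```

In practice such a score would be thresholded (or ranked via AUROC) to separate in-distribution from out-of-distribution test samples.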