Recent advances in out-of-distribution (OOD) detection on image data show that pre-trained neural network classifiers can separate in-distribution (ID) from OOD data well, leveraging the class-discriminative ability of the model itself. Existing methods either use the logit information directly or process the model's penultimate-layer activations. With "WeiPer", we introduce perturbations of the class projections in the final fully connected layer, creating a richer representation of the input. We show that this simple trick improves the OOD detection performance of a variety of methods, and we additionally propose a distance-based method that leverages the properties of the augmented WeiPer space. We achieve state-of-the-art OOD detection results across multiple benchmarks of the OpenOOD framework, with the gains especially pronounced in difficult settings where OOD samples lie close to the training distribution. We support our findings with theoretical motivations and empirical observations, and run extensive ablations to provide insights into why WeiPer works.
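The core idea of perturbing the final layer's class projections can be sketched as follows. This is a minimal illustration, not the authors' implementation: the perturbation scheme (here, additive random noise scaled relative to each class projection's norm by a hypothetical factor `delta`), the number of perturbed copies, and the MSP-style score over the augmented space are all illustrative assumptions.

```python
import numpy as np

def weiper_logits(features, W, b, n_perturb=10, delta=0.1, seed=0):
    """Project penultimate features through randomly perturbed copies
    of the final fully connected layer (illustrative sketch).

    features: (N, d) penultimate-layer activations
    W:        (C, d) final-layer weight matrix (class projections)
    b:        (C,)   final-layer bias
    Returns an (N, n_perturb * C) augmented representation.
    """
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_perturb):
        # Draw a random direction per class projection and rescale it
        # so its norm is a delta-fraction of that projection's norm.
        noise = rng.normal(size=W.shape)
        noise *= delta * (np.linalg.norm(W, axis=1, keepdims=True)
                          / np.linalg.norm(noise, axis=1, keepdims=True))
        outs.append(features @ (W + noise).T + b)
    return np.concatenate(outs, axis=1)

def msp_score(aug_logits, n_classes):
    """Max-softmax-probability score averaged over the perturbed copies
    (one of several scores one could compute in the augmented space)."""
    n, total = aug_logits.shape
    blocks = aug_logits.reshape(n, total // n_classes, n_classes)
    e = np.exp(blocks - blocks.max(axis=2, keepdims=True))
    p = e / e.sum(axis=2, keepdims=True)
    return p.max(axis=2).mean(axis=1)
```

In this sketch, a higher score suggests the input is ID; thresholding the score yields an OOD decision. The paper's distance-based method operates on the same augmented representation rather than on softmax probabilities.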