To detect distribution shifts and improve model safety, many out-of-distribution (OOD) detection methods rely on the predictive uncertainty or features of supervised models trained on in-distribution data. In this paper, we critically re-examine this popular family of OOD detection procedures, and we argue that these methods are fundamentally answering the wrong questions for OOD detection. There is no simple fix to this misalignment, since a classifier trained only on in-distribution classes cannot be expected to identify OOD points; for instance, a cat-dog classifier may confidently misclassify an airplane if the airplane happens to contain features that distinguish cats from dogs, despite looking nothing like either class. We find that uncertainty-based methods incorrectly conflate high uncertainty with being OOD, while feature-based methods incorrectly conflate large feature-space distance with being OOD. We show how these pathologies manifest as irreducible errors in OOD detection and identify common settings where these methods are ineffective. Interventions to improve OOD detection, such as feature-logit hybrid methods, scaling of model and data size, epistemic uncertainty representation, and outlier exposure, also fail to address this fundamental misalignment in objectives. Finally, we consider unsupervised density estimation and generative models for OOD detection, which we show have their own fundamental limitations.
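To make the two criticized scoring families concrete, below is a minimal sketch of the standard baselines they describe: an uncertainty-based score (max softmax probability over the classifier's logits) and a feature-based score (Mahalanobis-style distance from a point's features to the nearest class mean). The function names and the toy inputs are illustrative assumptions, not artifacts from the paper; the point is that each score is computed solely from an in-distribution-trained model, which is the source of the misalignment the abstract argues for.

```python
import numpy as np

def msp_score(logits: np.ndarray) -> float:
    """Uncertainty-based score: maximum softmax probability.
    Low values are (per the paper, incorrectly) read as evidence of OOD."""
    z = logits - logits.max()            # stabilize the softmax
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs.max())

def feature_distance_score(feature: np.ndarray,
                           class_means: np.ndarray,
                           cov_inv: np.ndarray) -> float:
    """Feature-based score: squared Mahalanobis distance to the nearest
    class mean in feature space. Large values are (again, incorrectly)
    read as evidence of OOD."""
    diffs = class_means - feature        # one row per in-distribution class
    d2 = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    return float(d2.min())

# Toy illustration with made-up numbers: a confidently classified input
# gets a high MSP score and a small feature distance, so both detectors
# would call it in-distribution, whether or not it actually is.
logits = np.array([2.0, 0.0])            # hypothetical cat-vs-dog logits
feature = np.array([1.0, 0.0])           # hypothetical penultimate features
class_means = np.array([[0.0, 0.0], [4.0, 0.0]])
print(msp_score(logits))                             # ~0.88, "confident"
print(feature_distance_score(feature, class_means, np.eye(2)))  # 1.0, "near"
```

An airplane image whose features happen to land near one class mean would receive exactly the same favorable scores, which is the failure mode the abstract describes.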