Detecting out-of-distribution (OOD) inputs is a critical safeguard for deploying machine learning models in the real world. However, most post-hoc detection methods operate on penultimate feature representations derived from global average pooling (GAP) -- a lossy operation that discards valuable distributional statistics from the activation maps it summarizes. We contend that these overlooked statistics, particularly channel-wise variance and dominant (maximum) activations, are highly discriminative for OOD detection. We introduce DAVIS, a simple and broadly applicable post-hoc technique that enriches feature vectors by incorporating these crucial statistics, directly addressing the information loss from GAP. Extensive evaluations show DAVIS sets a new benchmark across diverse architectures, including ResNet, DenseNet, and EfficientNet. It achieves significant reductions in the false positive rate (FPR95), with improvements of 48.26\% on CIFAR-10 using ResNet-18, 38.13\% on CIFAR-100 using ResNet-34, and 26.83\% on ImageNet-1k benchmarks using MobileNet-v2. Our analysis reveals the underlying mechanism for this improvement, providing a principled basis for moving beyond the mean in OOD detection.
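The core idea -- augmenting the standard GAP mean with channel-wise variance and maximum statistics computed over the same activation maps -- can be sketched as follows. This is a minimal illustration assuming a simple concatenation of the three statistics; the function name `davis_features` and the combination rule are illustrative, and the paper's exact enrichment scheme may differ.

```python
import numpy as np

def davis_features(activation_map: np.ndarray) -> np.ndarray:
    """Enrich a pooled feature vector with per-channel variance and max.

    activation_map: array of shape (C, H, W), the feature maps that
    precede global average pooling. Returns a (3*C,) vector:
    [GAP mean | channel-wise variance | dominant activation].
    Hypothetical sketch; not the authors' reference implementation.
    """
    c = activation_map.shape[0]
    flat = activation_map.reshape(c, -1)   # (C, H*W): spatial dims flattened
    mean = flat.mean(axis=1)               # the usual GAP feature vector
    var = flat.var(axis=1)                 # statistic discarded by GAP
    mx = flat.max(axis=1)                  # dominant (maximum) activation
    return np.concatenate([mean, var, mx])

# Example: a 64-channel 7x7 penultimate activation map
feats = davis_features(np.random.rand(64, 7, 7))
print(feats.shape)  # (192,)
```

The enriched vector can then be fed to any existing post-hoc score (e.g., a distance- or energy-based detector) in place of the plain GAP features.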