Out-of-distribution (OOD) detection is critical for the safe deployment of deep neural networks. State-of-the-art post-hoc methods typically derive OOD scores from the output logits or from the penultimate feature vector obtained via global average pooling (GAP). We contend that this exclusive reliance on the logits or the pooled feature vector discards a rich, complementary signal: the raw channel-wise statistics of the pre-pooling feature map that GAP destroys. In this paper, we introduce Catalyst, a post-hoc framework that exploits these under-explored signals. Catalyst computes an input-dependent scaling factor ($\gamma$) on-the-fly from these raw statistics (e.g., mean, standard deviation, and maximum activation). This $\gamma$ is then fused with the existing baseline score, multiplicatively modulating it -- an ``elastic scaling'' -- to push the ID and OOD score distributions further apart. We demonstrate that Catalyst is a generalizable framework: it integrates seamlessly with logit-based methods (e.g., Energy, ReAct, SCALE) and also provides a significant boost to distance-based detectors such as KNN. As a result, Catalyst achieves substantial and consistent performance gains, reducing the average False Positive Rate by 32.87% on CIFAR-10 (ResNet-18), 27.94% on CIFAR-100 (ResNet-18), and 22.25% on ImageNet (ResNet-50). Our results highlight the untapped potential of pre-pooling statistics and demonstrate that Catalyst is complementary to existing OOD detection approaches.