Numerous Out-of-Distribution (OOD) detection algorithms have been developed to identify unknown samples or objects in real-world model deployments. Outlier Exposure (OE) algorithms, a subset of these methods, typically employ auxiliary datasets to train OOD detectors, enhancing the reliability of their predictions. While previous methods have leveraged Stable Diffusion (SD) to generate pixel-space outliers, such outliers can complicate network optimization. We propose an Outlier Aware Learning (OAL) framework, which synthesizes OOD training data directly in the latent space. To regularize the model's decision boundary, we introduce a mutual information-based contrastive learning approach that amplifies the distinction between In-Distribution (ID) and collected OOD features. The efficacy of this contrastive learning technique is supported by both theoretical analysis and empirical results. Furthermore, we integrate knowledge distillation into our framework to preserve in-distribution classification accuracy. The combined application of contrastive learning and knowledge distillation substantially improves OOD detection performance, enabling OAL to outperform other OE methods by a considerable margin. Source code is available at: \url{https://github.com/HengGao12/OAL}.
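To make the contrastive idea concrete, the following is a minimal NumPy sketch of a loss that penalizes cosine similarity between ID features and synthesized OOD features. This is an illustrative stand-in only: the function name, the softplus form, and the temperature parameter are assumptions for exposition, not the paper's actual mutual information-based objective.

```python
import numpy as np

def separation_loss(id_feats, ood_feats, tau=0.1):
    """Illustrative ID/OOD separation loss (not the OAL objective).

    Penalizes high cosine similarity between ID and OOD feature rows
    via a temperature-scaled softplus: loss -> 0 when features are
    well separated, and grows as ID and OOD features align.
    """
    # L2-normalize each feature row so dot products are cosine similarities.
    id_n = id_feats / np.linalg.norm(id_feats, axis=1, keepdims=True)
    ood_n = ood_feats / np.linalg.norm(ood_feats, axis=1, keepdims=True)
    # Pairwise ID-OOD similarities, sharpened by the temperature tau.
    sim = id_n @ ood_n.T / tau
    # Softplus keeps the penalty smooth and non-negative.
    return float(np.mean(np.logaddexp(0.0, sim)))

# Toy check: ID features near [1, 0], OOD features pointing away.
id_f = np.array([[1.0, 0.0], [0.9, 0.1]])
ood_far = np.array([[-1.0, 0.0], [0.0, -1.0]])
# Separated features incur a much smaller loss than overlapping ones.
loss_far = separation_loss(id_f, ood_far)
loss_same = separation_loss(id_f, id_f)
```

In this toy setup, `loss_far` is near zero while `loss_same` is large, which is the behavior the abstract describes: the contrastive term widens the gap between ID and OOD features, regularizing the decision boundary.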