Deep learning models excel when the training data distribution aligns with that of the test data. Yet their performance degrades when faced with out-of-distribution (OOD) samples, which has generated great interest in the field of OOD detection. Current approaches typically assume that OOD samples originate from an unconcentrated distribution complementary to the training distribution. While this assumption is appropriate in the traditional unsupervised OOD (U-OOD) setting, it proves inadequate once the deployment environment of the underlying deep learning model is taken into account. To better reflect this real-world scenario, we introduce the novel setting of continual U-OOD detection. To tackle this new setting, we propose a method that starts from a U-OOD detector, which is agnostic to the OOD distribution, and slowly updates it during deployment to account for the actual OOD distribution. Our method uses a new U-OOD scoring function that combines the Mahalanobis distance with a nearest-neighbor approach. Furthermore, we design a confidence-scaled few-shot OOD detector that outperforms previous methods. We show that our method greatly improves upon strong baselines from related fields.
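To make the scoring function concrete, the sketch below combines a Mahalanobis distance to the training distribution with a k-nearest-neighbor distance in feature space. This is an illustrative sketch only: the additive fusion, the value of `k`, and the helper names (`fit_gaussian`, `ood_score`) are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def fit_gaussian(train_feats):
    """Estimate the mean and precision matrix of the training features."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    # Small ridge term keeps the covariance invertible
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mu, prec

def ood_score(x, mu, prec, train_feats, k=5):
    """Higher score = more likely out-of-distribution.

    Combines a parametric term (Mahalanobis distance to the fitted
    Gaussian) with a non-parametric term (distance to the k-th nearest
    training feature). The simple sum used here is a stand-in; the
    paper's exact combination is not specified in this abstract.
    """
    d = x - mu
    maha = float(d @ prec @ d)                       # squared Mahalanobis distance
    dists = np.linalg.norm(train_feats - x, axis=1)  # Euclidean distances to train set
    knn = float(np.sort(dists)[k - 1])               # k-th nearest-neighbor distance
    return maha + knn
```

A sample far from the training features scores higher on both terms, so the combined score separates in-distribution from OOD inputs even when either term alone is noisy.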