In recent years, there have been significant improvements in various forms of image outlier detection. However, outlier detection performance under adversarial settings lags far behind that in standard settings. This gap stems from the lack of effective exposure to adversarial scenarios during training, especially on unseen outliers, which prevents detection models from learning robust features. To bridge this gap, we introduce RODEO, a data-centric approach that generates effective outliers for robust outlier detection. More specifically, we show that incorporating outlier exposure (OE) and adversarial training can be an effective strategy for this purpose, provided the exposed training outliers meet certain characteristics: diversity, conceptual distinguishability from the inlier samples, and similarity to them. We leverage a text-to-image model to achieve this goal. We demonstrate both quantitatively and qualitatively that our adaptive OE method effectively generates ``diverse'' and ``near-distribution'' outliers by leveraging information from both the text and image domains. Moreover, our experimental results show that utilizing our synthesized outliers significantly enhances the performance of the outlier detector, particularly in adversarial settings.