Current state-of-the-art multi-class unsupervised anomaly detection (MUAD) methods rely on training encoder-decoder models to reconstruct anomaly-free features. We first show that these approaches face an inherent fidelity-stability dilemma when detecting anomalies via reconstruction residuals. We then abandon the reconstruction paradigm entirely and propose Retrieval-based Anomaly Detection (RAD), a training-free approach that stores anomaly-free features in a memory bank and detects anomalies through multi-level retrieval, matching test patches against the stored features. Experiments demonstrate that RAD achieves state-of-the-art performance across four established benchmarks (MVTec-AD, VisA, Real-IAD, 3D-ADAM) under both standard and few-shot settings. On MVTec-AD, RAD reaches 96.7\% Pixel AUROC with a single anomaly-free image, versus 98.5\% with the full training set. We further prove that retrieval-based scores theoretically upper-bound reconstruction-residual scores. Collectively, these findings overturn the assumption that MUAD requires task-specific training, showing that state-of-the-art anomaly detection is feasible with memory-based retrieval alone. Our code is available at https://github.com/longkukuhi/RAD.
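The retrieval idea above, restricted to a single level, can be sketched as a nearest-neighbor score over a memory of anomaly-free patch features. This is a minimal illustrative sketch, not RAD's actual implementation: the function names, feature shapes, Euclidean metric, and k-NN scoring rule are all assumptions.

```python
import numpy as np

def build_memory(normal_features: np.ndarray) -> np.ndarray:
    """Store anomaly-free patch features of shape (N, D) as the memory bank."""
    return normal_features

def anomaly_scores(memory: np.ndarray,
                   test_features: np.ndarray,
                   k: int = 1) -> np.ndarray:
    """Score each test patch (M, D) by its mean distance to the k nearest
    memory entries; a larger distance means more anomalous."""
    # Pairwise squared Euclidean distances between test patches and memory.
    d2 = ((test_features[:, None, :] - memory[None, :, :]) ** 2).sum(axis=-1)
    # Keep the k smallest distances per test patch, then average them.
    knn = np.sort(np.sqrt(d2), axis=1)[:, :k]
    return knn.mean(axis=1)
```

A test patch close to some stored anomaly-free feature receives a low score, while one far from every memory entry receives a high score; thresholding these scores yields the anomaly decision.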