Generative models have demonstrated significant success in anomaly detection and segmentation over the past decade. Recently, diffusion models have emerged as a powerful alternative, outperforming previous approaches such as GANs and VAEs. In typical diffusion-based anomaly detection, a model is trained on normal data; during inference, an anomalous image is perturbed to a predefined intermediate step of the forward diffusion process, and the corresponding normal image is then reconstructed through iterative reverse sampling. However, reconstruction-based approaches present three major challenges: (1) the reconstruction process is computationally expensive due to multiple sampling steps, making real-time applications impractical; (2) for complex or subtle patterns, the reconstructed image may correspond to a different normal pattern rather than the original input; and (3) choosing an appropriate intermediate noise level is challenging because it is application-dependent and often assumes prior knowledge of anomalies, an assumption that does not hold in unsupervised settings. We introduce Reconstruction-free Anomaly Detection with Attention-based diffusion models in Real-time (RADAR), which overcomes the limitations of reconstruction-based anomaly detection. Unlike current SOTA methods that reconstruct the input image, RADAR produces anomaly maps directly from the diffusion model, improving both detection accuracy and computational efficiency. We evaluate RADAR on a real-world 3D-printed material dataset and the MVTec-AD dataset. Our approach surpasses state-of-the-art diffusion-based and statistical machine learning models across all key metrics, including accuracy, precision, recall, and F1 score. Specifically, RADAR improves F1 score by 7% on MVTec-AD and 13% on the 3D-printed material dataset compared to the next best model. Code available at: https://github.com/mehrdadmoradi124/RADAR
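The reconstruction-based baseline that the abstract critiques can be sketched as follows. This is a minimal toy illustration, not RADAR or any paper's implementation: the noise schedule, the chosen intermediate step `t_star`, and the `denoiser` callback are all illustrative assumptions. The loop over `t_star` reverse-sampling steps is precisely the per-image cost that makes reconstruction-based detection slow (challenge 1), and the choice of `t_star` itself is the noise-level selection problem (challenge 3).

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bar, rng):
    """Perturb a clean image x0 to forward-diffusion step t:
    x_t = sqrt(alpha_bar[t]) * x0 + sqrt(1 - alpha_bar[t]) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def reconstruct(x_t, t, alpha, alpha_bar, denoiser):
    """Iteratively reverse-sample from step t back to step 0.
    Uses the deterministic DDPM posterior mean at each step
    (stochastic noise term omitted for simplicity)."""
    x = x_t
    for s in range(t, 0, -1):
        eps_hat = denoiser(x, s)  # model's predicted noise at step s
        beta_s = 1.0 - alpha[s]
        x = (x - beta_s / np.sqrt(1.0 - alpha_bar[s]) * eps_hat) / np.sqrt(alpha[s])
    return x

def anomaly_map(x0, x_rec):
    """Pixel-wise reconstruction error serves as the anomaly score."""
    return np.abs(x0 - x_rec)

# Illustrative usage with a toy "denoiser" that predicts zero noise.
rng = np.random.default_rng(0)
T = 50
beta = np.linspace(1e-4, 0.02, T + 1)      # linear schedule (assumption)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)
t_star = 30                                 # intermediate noise level (assumption)

x0 = np.zeros((8, 8))                       # stand-in for a test image
x_t = forward_diffuse(x0, t_star, alpha_bar, rng)
x_rec = reconstruct(x_t, t_star, alpha, alpha_bar, lambda x, s: np.zeros_like(x))
amap = anomaly_map(x0, x_rec)               # high values flag anomalous pixels
```

A reconstruction-free method such as RADAR avoids the `reconstruct` loop entirely by reading the anomaly map directly from the diffusion model.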