Recent studies have highlighted a practical setting of unsupervised anomaly detection (UAD) that builds a unified model for multi-class images. Despite various advancements addressing this challenging task, detection performance under the multi-class setting still lags far behind that of state-of-the-art class-separated models. Our research aims to bridge this substantial performance gap. In this paper, we introduce a minimalistic reconstruction-based anomaly detection framework, namely Dinomaly, which leverages pure Transformer architectures without relying on complex designs, additional modules, or specialized tricks. Given this powerful framework consisting only of Attentions and MLPs, we identify four simple components that are essential to multi-class anomaly detection: (1) Foundation Transformers that extract universal and discriminative features, (2) a Noisy Bottleneck in which pre-existing Dropout layers perform all the noise injection, (3) Linear Attention that naturally cannot focus, and (4) Loose Reconstruction that does not force layer-to-layer or point-by-point reconstruction. Extensive experiments are conducted across popular anomaly detection benchmarks including MVTec-AD, VisA, and Real-IAD. Our proposed Dinomaly achieves impressive image-level AUROCs of 99.6%, 98.7%, and 89.3% on the three datasets respectively, which not only surpasses state-of-the-art multi-class UAD methods but also achieves the most advanced class-separated UAD records.
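To illustrate why Linear Attention "naturally cannot focus," a minimal sketch of generic kernelized linear attention is given below. This follows the common formulation with a positive feature map φ(x) = elu(x) + 1 and exploits associativity to avoid the N×N attention matrix; it is an assumption-laden illustration, not necessarily Dinomaly's exact implementation. Because the unnormalized weights are produced by a smooth feature map rather than a softmax, the resulting attention distribution is flatter and less able to concentrate on a single token.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear attention (illustrative sketch, not Dinomaly's exact code).

    Q, K, V: arrays of shape (N, d). Uses phi(x) = elu(x) + 1 so that all
    attention weights are positive and the output rows are convex
    combinations of the rows of V.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    # Associativity: compute phi(K)^T V first, giving O(N d^2) cost
    # instead of the O(N^2 d) cost of materializing the attention matrix.
    KV = Kp.T @ V                       # (d, d) summary of keys and values
    Z = Qp @ Kp.sum(axis=0) + eps       # (N,) per-query normalizer
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = rng.standard_normal((3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)
```

Since the weights are positive and normalized, every output row stays inside the convex hull of the value rows, which makes the mapping a smooth averaging operator rather than a sharply peaked selector.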