Universal visual anomaly detection (AD) aims to identify anomalous images and segment anomalous regions in open and dynamic scenarios, following zero- and few-shot paradigms without any dataset-specific fine-tuning. Recent approaches have made significant progress by widely adopting visual-language foundation models. However, current methods often rely on complex prompt engineering, elaborate adaptation modules, and challenging training strategies, which ultimately limit their flexibility and generality. To address these issues, this paper rethinks the fundamental mechanism behind visual-language models for AD and presents an embarrassingly simple, general, and effective framework for Universal vision Anomaly Detection (UniADet). Specifically, we first observe that the language encoder is used to derive decision weights for anomaly classification and segmentation, and then demonstrate that it is unnecessary for universal AD. Second, we propose an embarrassingly simple method that completely decouples classification from segmentation and decouples cross-level features, i.e., it learns independent weights for different tasks and for hierarchical features. UniADet is highly simple (learning only decoupled weights), parameter-efficient (only 0.002M learnable parameters), general (adapting to a variety of foundation models), and effective (surpassing state-of-the-art zero-/few-shot methods by a large margin and, for the first time, even full-shot AD methods) on 14 real-world AD benchmarks covering both industrial and medical domains. We will make the code and models of UniADet available at https://github.com/gaobb/UniADet.
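To make the decoupling idea concrete, below is a minimal sketch of learning independent decision weights for classification and for each hierarchical feature level, in place of text-derived weights. It is an illustrative approximation only: the module name (DecoupledADHead), parameter names, feature dimensions, number of levels, and the cosine-similarity scoring are assumptions, not the authors' actual implementation.

```python
# Minimal sketch of decoupled decision weights on top of a frozen vision encoder.
# All names, shapes, and the scoring rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoupledADHead(nn.Module):
    """Learns one weight vector for image-level anomaly classification and
    independent weight vectors per feature level for anomaly segmentation,
    without deriving them from a language encoder."""

    def __init__(self, dim: int = 768, num_levels: int = 4):
        super().__init__()
        # Weight for image-level anomaly classification.
        self.cls_weight = nn.Parameter(torch.randn(dim) * 0.02)
        # Independent weights, one per hierarchical feature level, for segmentation.
        self.seg_weights = nn.Parameter(torch.randn(num_levels, dim) * 0.02)

    def forward(self, image_feat: torch.Tensor, patch_feats: list[torch.Tensor]):
        # image_feat: (B, D) global feature from a frozen vision encoder.
        # patch_feats: num_levels tensors of shape (B, N, D) patch features.
        cls_score = F.normalize(image_feat, dim=-1) @ F.normalize(self.cls_weight, dim=-1)

        seg_maps = []
        for level, feats in enumerate(patch_feats):
            w = F.normalize(self.seg_weights[level], dim=-1)
            seg_maps.append(F.normalize(feats, dim=-1) @ w)  # (B, N) anomaly map
        # Fuse anomaly maps across hierarchical levels by averaging.
        seg_score = torch.stack(seg_maps, dim=0).mean(dim=0)
        return cls_score, seg_score
```

With, say, D = 768 and four levels, the head holds only a few thousand learnable parameters, which is consistent in spirit with the parameter-efficiency claimed in the abstract, though the exact count in the paper may differ from this sketch.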