Video anomaly detection (VAD) focuses on identifying anomalies in videos. Supervised methods demand substantial in-domain training data and fail to deliver clear explanations for anomalies. In contrast, training-free methods leverage the knowledge reserves and language interactivity of large pre-trained models to detect anomalies. However, current fixed-length temporal-window sampling approaches struggle to accurately capture anomalies with varying temporal spans. Therefore, we propose VADTree, which utilizes a Hierarchical Granularity-aware Tree (HGTree) structure for flexible sampling in VAD. VADTree leverages the knowledge embedded in a pre-trained Generic Event Boundary Detection (GEBD) model to characterize potential anomaly event boundaries. Specifically, VADTree decomposes the video into generic event nodes based on boundary confidence, and performs adaptive coarse-fine hierarchical structuring and redundancy removal to construct the HGTree. Then, multi-dimensional priors are injected into visual language models (VLMs) to enhance node-wise anomaly perception, and anomaly reasoning for generic event nodes is achieved via large language models (LLMs). Finally, an inter-cluster node correlation method is used to integrate the multi-granularity anomaly scores. Extensive experiments on three challenging datasets demonstrate that VADTree achieves state-of-the-art performance in training-free settings while drastically reducing the number of sampled video segments. The code will be available at https://github.com/wenlongli10/VADTree.
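To make the HGTree construction concrete, the following is a minimal sketch of the core idea described above: per-frame boundary confidences (e.g., from a GEBD model) are thresholded to split the video into event segments, with coarse-to-fine thresholds producing nested tree levels, and segments identical to their parent skipped as a simple form of redundancy removal. All names (`Node`, `segment`, `build_hgtree`) and the multi-threshold scheme are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A generic event node spanning frames [start, end)."""
    start: int
    end: int
    children: list = field(default_factory=list)

def segment(conf, thresh):
    """Split the frame range at boundaries whose confidence >= thresh."""
    cuts = [i for i, c in enumerate(conf) if c >= thresh]
    edges = [0] + cuts + [len(conf)]
    return [(s, e) for s, e in zip(edges, edges[1:]) if e > s]

def build_hgtree(conf, thresholds):
    """Build a coarse-to-fine tree: higher thresholds give coarser levels.

    conf: per-frame boundary-confidence scores (hypothetical GEBD output).
    thresholds: confidence cutoffs; processed from coarse (high) to fine (low).
    """
    root = Node(0, len(conf))
    level = [root]
    for t in sorted(thresholds, reverse=True):
        segs = segment(conf, t)
        next_level = []
        for parent in level:
            for s, e in segs:
                # Attach only segments strictly nested inside the parent;
                # a segment equal to its parent is redundant and skipped.
                if (s >= parent.start and e <= parent.end
                        and (s, e) != (parent.start, parent.end)):
                    child = Node(s, e)
                    parent.children.append(child)
                    next_level.append(child)
        level = next_level or level
    return root
```

Each resulting node covers a candidate event span that can then be scored independently by the VLM/LLM stage, so anomalies of different temporal extents are matched by nodes at different tree depths rather than by one fixed window length.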