Video anomaly detection (VAD) is crucial for intelligent surveillance, but a significant challenge lies in identifying complex anomalies: events defined by intricate relationships and temporal dependencies among multiple entities rather than by isolated actions. While self-supervised learning (SSL) methods effectively model low-level spatiotemporal patterns, they often struggle to grasp the semantic meaning of these interactions. Conversely, large language models (LLMs) offer powerful contextual reasoning, but they are computationally expensive for frame-by-frame analysis and lack fine-grained spatial localization. We introduce HyCoVAD (Hybrid Complex Video Anomaly Detection), a hybrid SSL-LLM model that combines a multi-task SSL temporal analyzer with an LLM validator. The SSL module is built upon an nnFormer backbone, a transformer-based model for image segmentation, and is trained with multiple proxy tasks to score video frames and flag those suspected of being anomalous. The flagged frames are then forwarded to the LLM, which enriches the analysis with semantic context by applying structured, rule-based reasoning to validate the presence of anomalies. Experiments on the challenging ComplexVAD dataset show that HyCoVAD achieves 72.5% frame-level AUC, outperforming existing baselines by 12.5% while reducing LLM computation. We release our interaction anomaly taxonomy, adaptive thresholding protocol, and code to facilitate future research in complex VAD scenarios.
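The two-stage design described above can be sketched in a few lines: an SSL analyzer scores every frame cheaply, an adaptive threshold selects only the suspect frames, and only those are passed to the expensive LLM validator. This is a minimal illustrative sketch, not the authors' implementation; the function names (`ssl_score`, `llm_validate`) and the mean-plus-std thresholding rule are assumptions standing in for the paper's nnFormer-based analyzer and its adaptive thresholding protocol.

```python
def ssl_score(frames):
    """Stand-in for the multi-task SSL temporal analyzer (hypothetical):
    returns one anomaly score in [0, 1] per frame."""
    return [0.1, 0.2, 0.9, 0.85, 0.15]  # dummy scores for the sketch

def adaptive_threshold(scores, k=1.0):
    """Illustrative adaptive rule (an assumption, not the paper's protocol):
    flag scores more than k standard deviations above the mean."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean + k * var ** 0.5

def llm_validate(frame_idx):
    """Stand-in for the LLM validator, which applies structured,
    rule-based semantic reasoning to each suspect frame."""
    return True  # assume the LLM confirms the anomaly

def detect(frames):
    scores = ssl_score(frames)
    tau = adaptive_threshold(scores)
    # Only frames the SSL stage flags reach the LLM; this selective
    # forwarding is what keeps LLM computation low.
    suspects = [i for i, s in enumerate(scores) if s >= tau]
    return [i for i in suspects if llm_validate(i)]
```

With the dummy scores above, only the two high-scoring frames survive thresholding and are sent on for validation; in the real system the LLM would then accept or reject each one against the interaction anomaly taxonomy.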