Video Action Detection (VAD) involves localizing and categorizing action instances in videos. Videos inherently contain diverse information sources, including audio, visual cues, and surrounding scene context. Effectively leveraging this multi-modal information for VAD is challenging, as the model must accurately focus on action-relevant cues. In this study, we introduce a novel multi-modal VAD architecture called the Joint Actor-centric Visual, Audio, Language Encoder (JoVALE). JoVALE is the first VAD method to integrate audio and visual features with scene-descriptive context derived from large image captioning models. The core principle of JoVALE is the actor-centric aggregation of audio, visual, and scene-descriptive contexts, where action-related cues from each modality are identified and adaptively combined. We propose a specialized module called the Actor-centric Multi-modal Fusion Network, designed to capture the joint interactions among actors and multi-modal contexts through a Transformer architecture. Our evaluation on three popular VAD benchmarks, AVA, UCF101-24, and JHMDB51-21, demonstrates that incorporating multi-modal information yields significant performance gains, with JoVALE achieving state-of-the-art performance. The code will be available at \texttt{https://github.com/taeiin/AAAI2025-JoVALE}.
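For intuition, the following is a minimal sketch of the actor-centric fusion idea described above, assuming actor proposal queries attend to per-modality token features via Transformer cross-attention and are then combined with a learned gate. All module names, shapes, and the gating strategy are illustrative assumptions, not the released JoVALE implementation.

\begin{verbatim}
# Illustrative sketch only (not the authors' code): actor queries aggregate
# visual, audio, and caption-derived language tokens via cross-attention,
# then an adaptive gate weights each modality per actor.
import torch
import torch.nn as nn

class ActorCentricFusionSketch(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # One cross-attention block per modality: actor queries attend
        # to that modality's token features.
        self.visual_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.lang_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-actor softmax gate adaptively combines the three contexts.
        self.gate = nn.Linear(3 * dim, 3)
        self.out = nn.Linear(dim, dim)

    def forward(self, actor_queries, visual_tokens, audio_tokens, lang_tokens):
        # actor_queries: (B, N_actors, dim); *_tokens: (B, T_modality, dim)
        v, _ = self.visual_attn(actor_queries, visual_tokens, visual_tokens)
        a, _ = self.audio_attn(actor_queries, audio_tokens, audio_tokens)
        l, _ = self.lang_attn(actor_queries, lang_tokens, lang_tokens)
        w = torch.softmax(self.gate(torch.cat([v, a, l], dim=-1)), dim=-1)
        fused = w[..., 0:1] * v + w[..., 1:2] * a + w[..., 2:3] * l
        return self.out(fused)  # actor embeddings for action classification

# Toy usage with hypothetical shapes.
model = ActorCentricFusionSketch()
actors = torch.randn(2, 5, 256)     # 5 actor proposals per clip
visual = torch.randn(2, 196, 256)   # spatio-temporal visual tokens
audio = torch.randn(2, 64, 256)     # audio tokens
language = torch.randn(2, 32, 256)  # caption-derived text tokens
print(model(actors, visual, audio, language).shape)  # (2, 5, 256)
\end{verbatim}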