As a fundamental task in long-form video understanding, temporal action detection (TAD) aims to capture inherent temporal relations in untrimmed videos and identify candidate actions with precise boundaries. Over the years, various networks, including convolutions, graphs, and transformers, have been explored for effective temporal modeling in TAD. However, these modules typically treat past and future information equally, overlooking the crucial fact that changes in action boundaries are essentially causal events. Inspired by this insight, we propose leveraging the temporal causality of actions to enhance TAD representations by restricting the model's access to only past or future context. We introduce CausalTAD, which combines causal attention and causal Mamba to achieve state-of-the-art performance on multiple benchmarks. Notably, with CausalTAD, we ranked 1st in the Action Recognition, Action Detection, and Audio-Based Interaction Detection tracks at the EPIC-Kitchens Challenge 2024, as well as 1st in the Moment Queries track at the Ego4D Challenge 2024. Our code is available at https://github.com/sming256/OpenTAD/.
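The core idea of restricting a model to past-only or future-only context can be illustrated with a masked attention sketch. The following is a minimal toy example, not the paper's actual architecture: the function name `causal_attention`, the single-head formulation, and the additive fusion of the two directions are illustrative assumptions (the paper's causal Mamba branch, which applies the same directional restriction to a state-space scan, is not shown here).

```python
import numpy as np

def causal_attention(q, k, v, direction="past"):
    """Toy single-head attention restricted to one temporal direction.

    direction="past":   position i attends only to positions j <= i.
    direction="future": position i attends only to positions j >= i.
    This is an illustrative sketch of directional masking, not CausalTAD itself.
    """
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)            # (t, t) pairwise similarities
    idx = np.arange(t)
    if direction == "past":
        mask = idx[None, :] <= idx[:, None]  # lower-triangular: past-only
    else:
        mask = idx[None, :] >= idx[:, None]  # upper-triangular: future-only
    scores = np.where(mask, scores, -np.inf)  # block the disallowed direction
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

# Hypothetical fusion of both causal directions into one representation.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))             # 8 timesteps, 16-dim features
fused = causal_attention(x, x, x, "past") + causal_attention(x, x, x, "future")
```

Because the diagonal is always unmasked, the first timestep in the past-only branch (and the last in the future-only branch) can attend only to itself, so its output reduces to its own value vector.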