In real scenarios, videos can span several minutes or even hours. However, existing research on spatio-temporal video grounding (STVG), which localizes a target given a textual query, mainly focuses on short videos of tens of seconds, typically less than one minute, which limits real-world applications. In this paper, we explore Long-Form STVG (LF-STVG), which aims to locate targets in long-term videos. Compared with short videos, long-term videos contain much longer temporal spans and more irrelevant information, posing difficulties for existing STVG methods, which process all frames at once. To address this challenge, we propose an AutoRegressive Transformer architecture for LF-STVG, termed ART-STVG. Unlike conventional STVG methods that require the entire video sequence to make predictions at once, ART-STVG treats the video as streaming input and processes frames sequentially, enabling efficient handling of long videos. To model spatio-temporal context, we design spatial and temporal memory banks and apply them to the decoders. Since memories from different moments are not always relevant to the current frame, we introduce simple yet effective memory selection strategies that provide more relevant information to the decoders, significantly improving performance. Furthermore, instead of performing spatial and temporal localization in parallel, we propose a cascaded spatio-temporal design that connects the spatial decoder to the temporal decoder, allowing fine-grained spatial cues to assist complex temporal localization in long videos. Experiments on newly extended LF-STVG datasets show that ART-STVG significantly outperforms state-of-the-art methods, while achieving competitive performance on conventional short-form STVG.
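To make the streaming design concrete, the sketch below illustrates the general idea of autoregressive frame-by-frame processing with memory banks, relevance-based memory selection, and a cascaded spatial-to-temporal pathway. It is a minimal toy illustration, not the paper's implementation: the cosine-similarity top-k selection, the feature-averaging "decoders", and all function names here are hypothetical stand-ins for the actual Transformer decoders and selection strategies of ART-STVG.

```python
import numpy as np

def select_memories(query, bank, k=4):
    """Hypothetical memory selection: keep the k past entries most
    similar (cosine) to the current query feature."""
    if not bank:
        return np.empty((0, query.shape[-1]))
    M = np.stack(bank)                                           # (n, d)
    sims = M @ query / (np.linalg.norm(M, axis=1)
                        * np.linalg.norm(query) + 1e-8)
    top = np.argsort(sims)[::-1][:k]
    return M[top]

def process_stream(frames, k=4):
    """Toy autoregressive pass: each frame sees only selected past
    memories, and spatial context feeds the temporal step (cascade)."""
    spatial_bank, temporal_bank, outputs = [], [], []
    for f in frames:
        # Spatial step: fuse the frame with selected spatial memories.
        mem_s = select_memories(f, spatial_bank, k)
        spatial_ctx = f + mem_s.mean(axis=0) if len(mem_s) else f
        # Temporal step is conditioned on the spatial output (cascade),
        # plus selected temporal memories.
        mem_t = select_memories(spatial_ctx, temporal_bank, k)
        temporal_score = float(spatial_ctx.mean()
                               + (mem_t.mean() if mem_t.size else 0.0))
        outputs.append((spatial_ctx, temporal_score))
        # Update both memory banks for subsequent frames.
        spatial_bank.append(f)
        temporal_bank.append(spatial_ctx)
    return outputs

rng = np.random.default_rng(0)
frames = [rng.standard_normal(8) for _ in range(6)]   # 6 frames, dim 8
outs = process_stream(frames)
```

Because each frame only attends to a fixed-size selected subset of past memories, per-frame cost stays bounded regardless of video length, which is the property that makes the streaming formulation suited to long-form inputs.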