Video Temporal Grounding (VTG) is a crucial capability for video understanding models and plays a vital role in downstream tasks such as video browsing and editing. To handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend toward employing video LLMs for VTG tasks. However, current video LLM-based methods rely exclusively on natural language generation and lack the ability to model the clear structure inherent in videos, which restricts their effectiveness on VTG tasks. To address this issue, this paper first formally introduces the causal event modeling framework, which represents videos as sequences of events and predicts the current event from previous events, video inputs, and textual instructions. Each event consists of three components: timestamps, salient scores, and a textual caption. We then propose TRACE, a novel task-interleaved video LLM that effectively implements the causal event modeling framework in practice. TRACE processes visual frames, timestamps, salient scores, and text as distinct tasks, employing a separate encoder and decoding head for each. Task tokens are arranged in an interleaved sequence according to the causal event modeling framework's formulation. Extensive experiments on various VTG tasks and datasets demonstrate that TRACE outperforms state-of-the-art video LLMs. Our model and code are available at \url{https://github.com/gyxxyg/TRACE}.
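The causal event modeling framework described above can be illustrated with a minimal sketch: each event bundles timestamps, a salient score, and a caption, and events are decoded autoregressively, each conditioned on all previous events plus the video frames and the instruction. The `predict_next` callable below is a hypothetical stand-in for TRACE's actual decoding step, not part of the released code.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence

@dataclass
class Event:
    # Each event in the causal event modeling framework has three components:
    # timestamps (start/end, in seconds), a salient score, and a textual caption.
    start: float
    end: float
    salient_score: float
    caption: str

def predict_events(
    frames: Sequence,            # video inputs (e.g., sampled frame features)
    instruction: str,            # textual instruction
    predict_next: Callable[[List[Event], Sequence, str], Optional[Event]],
) -> List[Event]:
    # Autoregressive decoding: the current event is predicted from all
    # previous events, the video inputs, and the instruction, until the
    # model (here: the hypothetical `predict_next`) signals end-of-sequence.
    events: List[Event] = []
    while True:
        event = predict_next(events, frames, instruction)
        if event is None:
            break
        events.append(event)
    return events
```

In TRACE itself, the per-event components correspond to interleaved task tokens handled by separate encoders and decoding heads; this sketch only mirrors the factorization of the event sequence, not the model internals.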