Large language models (LLMs) excel at retrieving information from lengthy text, but their vision-language counterparts (VLMs) face difficulties with hour-long videos, especially for temporal grounding. Specifically, these VLMs are constrained by frame limitations, often losing essential temporal details needed for accurate event localization in extended video content. We propose ReVisionLLM, a recursive vision-language model designed to locate events in hour-long videos. Inspired by human search strategies, our model initially targets broad segments of interest, progressively revising its focus to pinpoint exact temporal boundaries. Our model can seamlessly handle videos of vastly different lengths, from minutes to hours. We also introduce a hierarchical training strategy that starts with short clips to capture distinct events and progressively extends to longer videos. To our knowledge, ReVisionLLM is the first VLM capable of temporal grounding in hour-long videos, outperforming previous state-of-the-art methods across multiple datasets by a significant margin (+2.6% R1@0.1 on MAD). The code is available at https://github.com/Tanveer81/ReVisionLLM.
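The coarse-to-fine search described above (target broad segments first, then recursively narrow to exact boundaries) can be illustrated with a minimal sketch. This is not the ReVisionLLM implementation; `score_fn` is a hypothetical stand-in for the model's segment-relevance scoring, and the fixed 4-way split is an assumption for illustration.

```python
def revise_focus(score_fn, start, end, min_len=1.0):
    """Recursive coarse-to-fine temporal search (illustrative sketch).

    Scores coarse candidate segments of the current window, keeps the
    most promising one, and recurses into it until the window is small
    enough to serve as the predicted event boundary.
    """
    if end - start <= min_len:
        return (start, end)
    # Split the current window into a few coarse candidate segments.
    n_segments = 4
    seg = (end - start) / n_segments
    candidates = [(start + i * seg, start + (i + 1) * seg)
                  for i in range(n_segments)]
    # Refine only the highest-scoring segment, mirroring how a human
    # narrows in on a region of interest rather than scanning everything.
    best = max(candidates, key=lambda c: score_fn(*c))
    return revise_focus(score_fn, best[0], best[1], min_len)


# Toy relevance function: peaks when the segment midpoint is near t=37.5.
toy_score = lambda s, e: -abs((s + e) / 2 - 37.5)
print(revise_focus(toy_score, 0.0, 64.0))  # → (37.0, 38.0)
```

Each recursion level inspects only a handful of segments, so an hour-long video needs just a logarithmic number of scoring passes instead of densely scoring every frame window.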