Video Temporal Grounding (VTG), which aims to localize video clips corresponding to natural language queries, is a fundamental yet challenging task in video understanding. Existing Transformer-based methods often suffer from redundant attention and suboptimal multi-modal alignment. To address these limitations, we propose MLVTG, a novel framework that integrates two key modules: MambaAligner and LLMRefiner. MambaAligner uses stacked Vision Mamba blocks as a backbone instead of Transformers to model temporal dependencies and extract robust video representations for multi-modal alignment. LLMRefiner leverages a specific frozen layer of a pre-trained Large Language Model (LLM) to implicitly transfer semantic priors, enhancing multi-modal alignment without fine-tuning. This dual alignment strategy, combining temporal modeling via structured state-space dynamics with semantic purification via textual priors, enables more precise localization. Extensive experiments on QVHighlights, Charades-STA, and TVSum demonstrate that MLVTG achieves state-of-the-art performance and significantly outperforms existing baselines.
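To make the two-branch design concrete, the sketch below shows one plausible way the forward pass could be wired up in PyTorch. It is illustrative only: `TemporalBlock` (a gated depthwise convolution standing in for a real Vision Mamba selective state-space block), the frozen `llm_layer` projection standing in for one frozen LLM layer, the fusion rule, and all dimensions and names are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TemporalBlock(nn.Module):
    """Placeholder for a Vision Mamba block (gated depthwise conv stand-in)."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                     # x: (B, T, D)
        h = self.norm(x)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)      # temporal mixing
        h = self.proj(h * torch.sigmoid(self.gate(x)))        # gated update
        return x + h                                          # residual connection


class MLVTGSketch(nn.Module):
    """Illustrative pipeline: MambaAligner-style video branch plus an
    LLMRefiner-style frozen projection of the query features."""
    def __init__(self, dim=256, depth=4, llm_dim=4096):
        super().__init__()
        # "MambaAligner": stacked temporal blocks over video clip features
        self.aligner = nn.Sequential(*[TemporalBlock(dim) for _ in range(depth)])
        # "LLMRefiner": a single frozen layer standing in for one frozen
        # transformer layer of a pre-trained LLM (weights never updated)
        self.llm_layer = nn.Linear(llm_dim, llm_dim)
        for p in self.llm_layer.parameters():
            p.requires_grad = False
        self.query_proj = nn.Linear(llm_dim, dim)  # map refined text to video dim
        self.head = nn.Linear(dim, 2)              # per-clip localization logits

    def forward(self, video_feats, query_feats):
        # video_feats: (B, T, dim) clip features; query_feats: (B, L, llm_dim) tokens
        v = self.aligner(video_feats)                      # temporal modeling
        q = self.query_proj(self.llm_layer(query_feats))   # frozen semantic prior
        q = q.mean(dim=1, keepdim=True)                    # pooled query embedding
        fused = v * q                                      # simple multi-modal fusion
        return self.head(fused)                            # (B, T, 2) scores


if __name__ == "__main__":
    model = MLVTGSketch()
    scores = model(torch.randn(2, 75, 256), torch.randn(2, 20, 4096))
    print(scores.shape)  # torch.Size([2, 75, 2])
```

Freezing `llm_layer` mirrors the abstract's claim that semantic priors are transferred without fine-tuning the LLM; only the video branch, the projection, and the prediction head would receive gradients in this sketch.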