Existing large video-language models (LVLMs) struggle to comprehend long videos correctly due to limited context. To address this problem, fine-tuning long-context LVLMs and employing GPT-based agents have emerged as promising solutions. However, fine-tuning LVLMs requires extensive high-quality data and substantial GPU resources, while GPT-based agents rely on proprietary models (e.g., GPT-4o). In this paper, we propose Video Retrieval-Augmented Generation (Video-RAG), a training-free and cost-effective pipeline that employs visually-aligned auxiliary texts to facilitate cross-modality alignment while providing additional information beyond the visual content. Specifically, we leverage open-source external tools to extract visually-aligned information from pure video data (e.g., audio transcription, optical character recognition, and object detection), and incorporate the extracted information into an existing LVLM as auxiliary texts, alongside video frames and queries, in a plug-and-play manner. Our Video-RAG offers several key advantages: (i) lightweight, with low computing overhead due to single-turn retrieval; (ii) easy implementation and compatibility with any LVLM; and (iii) significant, consistent performance gains across long video understanding benchmarks, including Video-MME, MLVU, and LongVideoBench. Notably, when paired with a 72B model, our pipeline outperforms proprietary models such as Gemini-1.5-Pro and GPT-4o.
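To make the plug-and-play idea concrete, the following is a minimal, hypothetical sketch of the single-turn pipeline described above. The specific tools and the LVLM interface are assumptions, not the authors' implementation: it stands in Whisper for audio transcription, EasyOCR for on-screen text, and an off-the-shelf YOLO detector for objects, and `query_lvlm` is a placeholder for whatever LVLM API is actually used.

```python
# Hypothetical sketch of a Video-RAG-style pipeline: extract auxiliary texts
# from the raw video with open-source tools, then pass them to an existing
# LVLM together with sampled frames and the user query in a single turn.
import cv2            # frame sampling
import whisper        # ASR (stand-in choice)
import easyocr        # OCR (stand-in choice)
from ultralytics import YOLO  # object detection (stand-in choice)


def sample_frames(video_path: str, num_frames: int = 8):
    """Uniformly sample a handful of frames from the video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in range(0, total, max(total // num_frames, 1)):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames


def build_auxiliary_text(video_path: str, frames) -> str:
    """Collect visually-aligned auxiliary texts: transcript, OCR, detections."""
    # 1) Audio transcript via Whisper (reads the audio track of the video file).
    asr_text = whisper.load_model("base").transcribe(video_path)["text"]

    # 2) On-screen text via EasyOCR on the sampled frames.
    reader = easyocr.Reader(["en"])
    ocr_lines = []
    for frame in frames:
        for _bbox, text, conf in reader.readtext(frame):
            if conf > 0.5:
                ocr_lines.append(text)

    # 3) Object names via a generic detector on the sampled frames.
    detector = YOLO("yolov8n.pt")
    objects = set()
    for frame in frames:
        result = detector(frame)[0]
        for box in result.boxes:
            objects.add(detector.names[int(box.cls)])

    return (
        f"[Audio transcript] {asr_text}\n"
        f"[On-screen text] {'; '.join(ocr_lines)}\n"
        f"[Detected objects] {', '.join(sorted(objects))}"
    )


def answer_query(video_path: str, query: str) -> str:
    frames = sample_frames(video_path)
    aux_text = build_auxiliary_text(video_path, frames)
    # query_lvlm is a placeholder for any existing LVLM's inference call;
    # the auxiliary text is simply prepended to the query, plug-and-play.
    return query_lvlm(frames=frames, prompt=f"{aux_text}\n\nQuestion: {query}")
```

In this sketch the auxiliary text is retrieved once per question (single-turn), which is where the low computing overhead claimed above comes from; no LVLM weights are touched.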