Existing large video-language models (LVLMs) struggle to comprehend long videos correctly due to their limited context length. To address this problem, fine-tuning long-context LVLMs and employing GPT-based agents have emerged as promising solutions. However, fine-tuning LVLMs requires extensive high-quality data and substantial GPU resources, while GPT-based agents rely on proprietary models (e.g., GPT-4o). In this paper, we propose Video Retrieval-Augmented Generation (Video-RAG), a training-free and cost-effective pipeline that employs visually-aligned auxiliary texts to facilitate cross-modality alignment while providing additional information beyond the visual content. Specifically, we leverage open-source external tools to extract visually-aligned information from pure video data (e.g., audio transcription, optical character recognition, and object detection), and incorporate the extracted information into an existing LVLM as auxiliary texts, alongside the video frames and query, in a plug-and-play manner. Our Video-RAG offers several key advantages: (i) lightweight, with low computing overhead due to single-turn retrieval; (ii) easy to implement and compatible with any LVLM; and (iii) significant, consistent performance gains across long-video understanding benchmarks, including Video-MME, MLVU, and LongVideoBench. Notably, our approach outperforms proprietary models such as Gemini-1.5-Pro and GPT-4o when applied to a 72B open-source model.
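The pipeline described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the tool outputs (ASR transcript, OCR text, detection labels) are stubbed with sample strings, and the single-turn retrieval is approximated by a simple word-overlap score in place of whatever retriever the authors use; all function names here are hypothetical.

```python
# Minimal sketch of the Video-RAG idea: in a single turn, retrieve the
# query-relevant auxiliary texts extracted from the video by external tools
# (ASR, OCR, object detection) and prepend them to the LVLM prompt.
# Retrieval here is a toy word-overlap score; tool outputs are stubbed.

def relevance(query: str, text: str) -> float:
    """Fraction of query words that also appear in the auxiliary text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def retrieve_auxiliary(query: str, aux_texts: list[str], k: int = 2) -> list[str]:
    """Single-turn retrieval: keep only the top-k auxiliary texts."""
    return sorted(aux_texts, key=lambda t: relevance(query, t), reverse=True)[:k]

def build_prompt(query: str, aux_texts: list[str]) -> str:
    """Concatenate retrieved auxiliary texts with the user query; the
    sampled video frames would be passed to the LVLM alongside this text."""
    context = "\n".join(f"[AUX] {t}" for t in retrieve_auxiliary(query, aux_texts))
    return f"{context}\n[QUERY] {query}"

# Stub outputs standing in for the ASR / OCR / object-detection tools.
aux = [
    "ASR: the narrator says the bridge opened in 1937",
    "OCR: sign reads Golden Gate Bridge Visitor Center",
    "DET: frame 120 contains a bridge and several cars",
]
print(build_prompt("when did the bridge open", aux))
```

Because retrieval keeps only the texts relevant to the query, the auxiliary context stays short, which is what keeps the pipeline lightweight and compatible with any LVLM's existing prompt interface.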