Vision-Language Models (VLMs) excel at visual reasoning but still struggle to integrate external knowledge. Retrieval-Augmented Generation (RAG) is a promising remedy, but current methods remain inefficient and often fail to maintain high answer quality. To address these challenges, we propose VideoSpeculateRAG, an efficient VLM-based RAG framework built on two key ideas. First, we introduce a speculative decoding pipeline: a lightweight draft model quickly generates multiple answer candidates, which a more accurate heavyweight model then verifies and refines, substantially reducing inference latency without sacrificing correctness. Second, we identify a major source of error, incorrect entity recognition in retrieved knowledge, and mitigate it with a simple yet effective similarity-based filtering strategy that improves entity alignment and boosts overall answer accuracy. Experiments demonstrate that VideoSpeculateRAG achieves accuracy comparable to or higher than standard RAG approaches while accelerating inference by approximately 2x. Our framework highlights the potential of combining speculative decoding with retrieval-augmented reasoning to improve efficiency and reliability in complex, knowledge-intensive multimodal tasks.
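The draft-then-verify pipeline described above can be sketched as follows. This is a minimal illustration only: `draft_generate` and `verify_score` are hypothetical stand-ins for the lightweight draft VLM and the heavyweight verifier, whose actual architectures and scoring rules are not specified here.

```python
def draft_generate(question: str, context: str, k: int = 4) -> list[str]:
    """Cheap draft model: quickly propose k candidate answers (stubbed)."""
    return [f"candidate-{i}: {context[:30]}" for i in range(k)]

def verify_score(question: str, candidate: str) -> float:
    """Expensive verifier: score one candidate (stubbed heuristic)."""
    return -abs(len(candidate) - len(question))

def speculate_answer(question: str, context: str, k: int = 4) -> str:
    """Draft cheaply, then let the heavyweight model pick the best candidate."""
    candidates = draft_generate(question, context, k)
    # The heavyweight model only scores k short candidates instead of
    # decoding a full answer token by token, which is where the latency
    # savings in a speculative pipeline come from.
    return max(candidates, key=lambda c: verify_score(question, c))
```

In a real system the verifier would also be able to refine a near-miss candidate rather than only rank the drafts; the stub above shows only the selection step.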