The scaling of large language models to encode all the world's knowledge in model parameters is unsustainable and has exacerbated resource barriers. Retrieval-Augmented Generation (RAG) offers a potential solution, yet its application to vision-language models (VLMs) is underexplored. Existing methods focus on models designed for single tasks. Furthermore, they are limited by the need for resource-intensive pretraining, additional parameter requirements, unaddressed modality prioritization, and a lack of clear benefit over non-retrieval baselines. This paper introduces RAVEN, a multitask retrieval-augmented VLM framework that enhances base VLMs through efficient, task-specific fine-tuning. By integrating retrieval-augmented samples without the need for additional retrieval-specific parameters, we show that the model acquires retrieval properties that are effective across multiple tasks. Our results and extensive ablations across retrieved modalities for the image captioning and VQA tasks indicate significant performance improvements compared to non-retrieved baselines: +1 CIDEr on MSCOCO, +4 CIDEr on NoCaps, and nearly +3\% accuracy on specific VQA question types. This underscores the efficacy of applying RAG approaches to VLMs, marking a stride toward more efficient and accessible multimodal learning.
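To illustrate the core idea of "integrating retrieval-augmented samples without additional retrieval-specific parameters", the following is a minimal sketch: retrieved context is concatenated into the model's text input, so a standard VLM can be fine-tuned on augmented samples with its architecture unchanged. The function name, prompt format, and separator token are illustrative assumptions, not the paper's actual implementation.

```python
def build_augmented_sample(task_prompt: str, retrieved_texts: list[str], k: int = 2) -> str:
    """Build a retrieval-augmented text input (hypothetical format).

    Prepends up to k retrieved text snippets (e.g. captions of similar
    images) to the task prompt, separated by a plain-text marker. The
    base VLM consumes this string as-is, so no retrieval-specific
    parameters are added to the model.
    """
    context = " ".join(retrieved_texts[:k])
    return f"Retrieved context: {context} Task: {task_prompt}"


# Example: augmenting an image-captioning prompt with two retrieved captions.
sample = build_augmented_sample(
    "Describe the image.",
    ["A dog runs on a beach.", "A puppy plays in the sand.", "Waves at sunset."],
)
```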