Recently, Large Vision Language Models (LVLMs) have unlocked many complex use cases that require Multi-Modal (MM) understanding (e.g., image captioning or visual question answering) and MM generation (e.g., text-guided image generation or editing) capabilities. To further improve the output fidelity of LVLMs, we introduce UniRAG, a plug-and-play technique that adds relevant retrieved information to prompts as few-shot examples during inference. Contrary to the common belief that Retrieval Augmentation (RA) mainly improves generation or understanding of uncommon entities, our evaluation on the MSCOCO dataset, which features common entities, shows that both proprietary models like GPT-4o and Gemini-Pro and smaller open-source models like LLaVA, LaVIT, and Emu2 significantly improve their generation quality when their input prompts are augmented with relevant information retrieved by Vision-Language (VL) retrievers such as the UniIR models. All the code necessary to reproduce our results is available at https://github.com/castorini/UniRAG.
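To illustrate the plug-and-play idea, the following is a minimal sketch of retrieval-augmented few-shot prompting for image captioning. It only assumes the high-level recipe described above; the interfaces `retrieve_top_k` and `build_prompt` are hypothetical placeholders, not the actual UniRAG or UniIR API.

```python
# Sketch of UniRAG-style retrieval augmentation at inference time:
# retrieve relevant image-caption pairs, then prepend them to the prompt
# as few-shot examples before the query image.

from dataclasses import dataclass
from typing import List


@dataclass
class RetrievedExample:
    image_path: str  # path or URL of a retrieved candidate image
    caption: str     # its associated caption


def retrieve_top_k(query_image: str, k: int = 3) -> List[RetrievedExample]:
    """Placeholder for a vision-language retriever (e.g., a UniIR model)
    returning the k most relevant image-caption pairs for the query image."""
    raise NotImplementedError("plug in a real multi-modal retriever here")


def build_prompt(query_image: str, examples: List[RetrievedExample]) -> list:
    """Interleave retrieved image-caption pairs as few-shot examples,
    then append the query image whose caption the LVLM should generate."""
    prompt: list = []
    for ex in examples:
        prompt.append({"type": "image", "path": ex.image_path})
        prompt.append({"type": "text", "text": f"Caption: {ex.caption}"})
    prompt.append({"type": "image", "path": query_image})
    prompt.append({"type": "text", "text": "Caption:"})
    return prompt


def caption_prompt_with_unirag(query_image: str, k: int = 3) -> list:
    """No extra training is required: retrieval happens only at inference."""
    examples = retrieve_top_k(query_image, k)
    return build_prompt(query_image, examples)
```

The resulting prompt can then be passed to any LVLM that accepts interleaved image-text input; swapping the retriever or the generator requires no changes to the rest of the pipeline.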