Multimodal LLMs (MLLMs) are the natural extension of large language models to handle multimodal inputs, combining text and image data. They have recently garnered attention due to their capability to address complex tasks involving both modalities. However, their effectiveness is limited to the knowledge acquired during training, which restricts their practical utility. In this work, we introduce a novel method to enhance the adaptability of MLLMs by integrating external knowledge sources. Our proposed model, Reflective LLaVA (ReflectiVA), utilizes reflective tokens to dynamically determine the need for external knowledge and to predict the relevance of information retrieved from an external database. These tokens are trained following a two-stage, two-model training recipe. This ultimately enables the MLLM to manage external knowledge while preserving fluency and performance on tasks where external knowledge is not needed. Through our experiments, we demonstrate the efficacy of ReflectiVA for knowledge-based visual question answering, highlighting its superior performance compared to existing methods. Source code and trained models are publicly available at https://github.com/aimagelab/ReflectiVA.
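The control flow implied by the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the reflective-token names (`<RET>`, `<NORET>`, `<REL>`, `<NOREL>`), the heuristic `predict_token` stand-in, and the `retrieve_fn`/`generate_fn` callbacks are all hypothetical placeholders for what would, in the real system, be special tokens scored by the trained MLLM.

```python
# Hedged sketch of reflective-token inference: the model first decides whether
# a query needs external knowledge, then filters retrieved passages by relevance.
# Token names and heuristics below are illustrative assumptions only.

RETRIEVE, NO_RETRIEVE = "<RET>", "<NORET>"
RELEVANT, NOT_RELEVANT = "<REL>", "<NOREL>"

def predict_token(query, passage=None):
    """Stand-in for the MLLM predicting a reflective token.
    A real system would score these special tokens with the language model."""
    if passage is None:
        # Retrieval decision (toy heuristic: question words suggest factual need).
        needs_facts = any(w in query.lower() for w in ("who", "when", "where"))
        return RETRIEVE if needs_facts else NO_RETRIEVE
    # Relevance judgment for one retrieved passage (toy heuristic: word overlap).
    overlap = set(query.lower().split()) & set(passage.lower().split())
    return RELEVANT if len(overlap) >= 2 else NOT_RELEVANT

def answer_with_reflection(query, retrieve_fn, generate_fn):
    """Two-step inference: (1) retrieval decision, (2) relevance filtering."""
    if predict_token(query) == NO_RETRIEVE:
        return generate_fn(query, context=None)      # answer from parametric knowledge
    passages = retrieve_fn(query)                    # query the external database
    kept = [p for p in passages if predict_token(query, p) == RELEVANT]
    return generate_fn(query, context=kept or None)  # ground the answer if possible
```

In this sketch, queries judged self-contained bypass retrieval entirely, which is how the approach can preserve fluency and performance on tasks that need no external knowledge.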