Question answering, question asking, and assessment are three innate human capabilities crucial for understanding the world and acquiring knowledge. By strengthening these capabilities, humans can use data more effectively, leading to better comprehension and learning outcomes. Current Multimodal Large Language Models (MLLMs) focus primarily on question answering, often neglecting the full potential of questioning and assessment skills. Inspired by the human learning mechanism, we introduce LOVA3, an innovative framework named "Learning tO Visual question Answering, Asking and Assessment," designed to equip MLLMs with these additional capabilities. Our approach introduces two supplementary training tasks, GenQA and EvalQA, aimed at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark, EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 validation and testing samples. We posit that equipping MLLMs with the capabilities to answer, ask, and assess questions will deepen their multimodal comprehension and ultimately improve overall performance. To validate this hypothesis, we train MLLMs using the LOVA3 framework and evaluate them on a range of multimodal datasets and benchmarks. Our results demonstrate consistent performance gains, underscoring the critical role of these additional tasks in fostering comprehensive intelligence in MLLMs. The code is available at https://github.com/showlab/LOVA3.