We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) how to systematically structure and evaluate complicated multimodal tasks; (2) how to design evaluation metrics that work well across question and answer types; and (3) how to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model that can integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from combinations of these capabilities. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables evaluation across different question types and answer styles, yielding a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models.
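The LLM-based evaluator can be realized by prompting a judge LLM with the question, the ground-truth answer, and the model's open-ended prediction, and asking it to return a soft correctness score in [0, 1]. Below is a minimal sketch of this idea; the choice of judge model (gpt-4o), the exact prompt wording, and the score-parsing logic are illustrative assumptions, not the benchmark's exact setup.

```python
# Minimal sketch of an LLM-based evaluator for open-ended answers.
# Assumptions (not from the paper): judge model name, prompt wording,
# and OpenAI-style client usage are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """Compare the ground truth and the model prediction, and \
grade the prediction's correctness as a score between 0.0 and 1.0 \
(1.0 = fully correct, 0.0 = wrong). Reply with the score only.

Question: {question}
Ground truth: {ground_truth}
Prediction: {prediction}
Score:"""

def score_answer(question: str, ground_truth: str, prediction: str) -> float:
    """Ask a judge LLM for a soft correctness score in [0, 1]."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical judge model
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                question=question,
                ground_truth=ground_truth,
                prediction=prediction,
            ),
        }],
        temperature=0,  # deterministic grading
    )
    text = response.choices[0].message.content.strip()
    try:
        return min(max(float(text), 0.0), 1.0)  # clamp to [0, 1]
    except ValueError:
        return 0.0  # an unparsable reply is counted as incorrect
```

Because the judge returns a continuous score rather than an exact-match verdict, the same metric applies whether the answer is a single word, a number, or a free-form explanation, which is what yields a unified score across question types.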