As the capabilities of large multimodal models (LMMs) continue to advance, the need to evaluate their performance grows accordingly. An even larger gap exists in evaluating the advanced knowledge and reasoning abilities of LMMs in non-English contexts such as Chinese. We introduce CMMMU, a new Chinese Massive Multi-discipline Multimodal Understanding benchmark designed to evaluate LMMs on tasks demanding college-level subject knowledge and deliberate reasoning in a Chinese context. CMMMU is inspired by and strictly follows the annotation and analysis pattern of MMMU. It includes 12K manually collected multimodal questions from college exams, quizzes, and textbooks, covering the same six core disciplines as its companion MMMU: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span 30 subjects and comprise 39 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures. CMMMU focuses on complex perception and reasoning with domain-specific knowledge in the Chinese context. We evaluate 11 open-source LMMs and the proprietary GPT-4V(ision). Even GPT-4V achieves only 42% accuracy, indicating large room for improvement. CMMMU will help the community build the next generation of LMMs toward expert artificial intelligence and promote the democratization of LMMs by providing diverse language contexts.