Recent advancements in large language models (LLMs) showcase varied multilingual capabilities across tasks such as translation, code generation, and reasoning. Previous assessments have often limited their scope to fundamental natural language processing (NLP) tasks or to isolated capability-specific tasks. To address this gap, we present a comprehensive multilingual multitask benchmark. First, we introduce a pipeline for selecting available and reasonable benchmarks from a large pool of existing ones, addressing an oversight in previous work regarding the utility of these benchmarks, i.e., their ability to differentiate among the models being evaluated. Leveraging this pipeline, we introduce P-MMEval, a large-scale benchmark covering effective fundamental and capability-specialized datasets. Furthermore, P-MMEval delivers consistent language coverage across its datasets and provides parallel samples across languages. Finally, we conduct extensive experiments on representative multilingual model series to compare performance across models, analyze dataset effectiveness, examine the impact of prompts on model performance, and explore the relationship between multilingual performance and factors such as task, model size, and language. These insights offer valuable guidance for future research. The dataset is available at https://huggingface.co/datasets/Qwen/P-MMEval.
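For reference, the released dataset can be loaded with the Hugging Face `datasets` library. The sketch below is illustrative only: the subset name (`"mmlu"`) and split (`"test"`) are assumptions, and the actual configuration names should be checked on the dataset card at the URL above.

```python
# Minimal sketch for loading P-MMEval via the Hugging Face `datasets` library.
# NOTE: the subset name ("mmlu") and split ("test") are illustrative assumptions;
# consult https://huggingface.co/datasets/Qwen/P-MMEval for the real config names.
from datasets import load_dataset

dataset = load_dataset("Qwen/P-MMEval", "mmlu", split="test")
print(dataset[0])  # inspect one parallel multilingual sample
```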