We present TMMLU+, a new benchmark designed for Traditional Chinese language understanding. TMMLU+ is a multiple-choice question-answering dataset covering 66 subjects from the elementary to the professional level. It is six times larger than its predecessor, Taiwan Massive Multitask Language Understanding (TMMLU), and has a more balanced subject distribution. We benchmark closed-source models and 26 open-weight Chinese large language models (LLMs), ranging from 1.8B to 72B parameters, on TMMLU+. Our findings reveal that (1) Traditional Chinese models still trail their Simplified Chinese counterparts, highlighting the need for more focused advancement of LLMs for Traditional Chinese; (2) current LLMs still fall short of average human performance, suggesting that future research should delve deeper into social science and humanities subjects; and (3) among all the tokenization compression metrics examined, only the fertility score demonstrates a strong correlation with our benchmark results. We foresee that TMMLU+ will pinpoint areas for future model improvement, thereby narrowing the gap between machine and human linguistic capabilities and supporting researchers in developing Traditional Chinese LLMs. Our dataset, along with the benchmark source code, is accessible at huggingface.co/datasets/ikala/tmmluplus.
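The fertility score mentioned in finding (3) is commonly defined as the average number of tokens a tokenizer produces per word. The minimal sketch below illustrates that definition; the toy byte-level tokenizer and the sample words are assumptions for illustration only, and a real evaluation would use each model's actual subword tokenizer.

```python
def fertility(words, tokenize):
    """Fertility score: mean number of tokens produced per word."""
    total_tokens = sum(len(tokenize(w)) for w in words)
    return total_tokens / len(words)

# Toy byte-level tokenizer (illustrative assumption): one token per
# UTF-8 byte. Real subword vocabularies compress far better than this.
def byte_tokenize(word):
    return list(word.encode("utf-8"))

# Each sample word is two Chinese characters, i.e. six UTF-8 bytes,
# so this toy setup yields a fertility of 6.0.
words = ["語言", "模型", "評測"]
print(fertility(words, byte_tokenize))  # → 6.0
```

A lower fertility means the tokenizer covers the language more efficiently (fewer tokens per word), which is why it is a natural candidate for correlating tokenizer quality with benchmark performance.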