As large language models (LLMs) have grown in prevalence, benchmarks have become essential for evaluating these models and understanding their capabilities. Most commonly, test accuracy averaged across multiple subtasks is used to rank models on leaderboards and to determine which model is best suited for a given purpose. In this paper, we investigate the robustness of this accuracy measurement on a widely used multiple-choice question answering dataset, MMLU. When the answer label contents are shuffled, we find that every model we examine decreases in accuracy on MMLU, but not every model is equally sensitive. These findings suggest a possible adjustment to the standard practice of leaderboard testing, in which we additionally consider the percentage of examples each model answers correctly by random chance.
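To make the perturbation concrete, the following is a minimal sketch of how answer option contents might be permuted for an MMLU-style item while tracking the new gold label. The field names (`question`, `choices`, `answer_idx`) and the use of a single random permutation per item are assumptions for illustration, not the paper's exact protocol.

```python
import random


def shuffle_choices(question, choices, answer_idx, rng=None):
    """Permute the option contents of one multiple-choice item.

    `choices` holds the option strings (e.g., for labels A-D) and
    `answer_idx` is the index of the correct option before shuffling.
    Returns the question, the shuffled options, and the new gold index.
    (Hypothetical helper; field names are assumptions for illustration.)
    """
    rng = rng or random.Random(0)
    order = list(range(len(choices)))
    rng.shuffle(order)
    shuffled = [choices[i] for i in order]
    new_answer_idx = order.index(answer_idx)
    return question, shuffled, new_answer_idx


# Made-up example item, not drawn from MMLU:
q = "Which planet is known as the Red Planet?"
opts = ["Venus", "Mars", "Jupiter", "Saturn"]
print(shuffle_choices(q, opts, answer_idx=1))
```

Under this kind of perturbation, a model whose accuracy drops sharply may be relying on label position or surface cues rather than the option contents, which is the sensitivity the abstract describes.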