Large Language Model (LLM) leaderboards based on benchmark rankings are regularly used to guide practitioners in model selection. Often, the published leaderboard rankings are taken at face value; we show this is a (potentially costly) mistake, because under existing leaderboards the relative performance of LLMs is highly sensitive to often minute details. We show that for popular multiple-choice question benchmarks (e.g., MMLU), minor perturbations to the benchmark, such as changing the order of choices or the method of answer selection, result in ranking shifts of up to 8 positions. We explain this phenomenon by conducting systematic experiments over three broad categories of benchmark perturbations and identifying the sources of this behavior. Our analysis yields several best-practice recommendations, including the advantage of a hybrid scoring method for answer selection. Our study highlights the dangers of relying on simple benchmark evaluations and charts a path toward more robust evaluation schemes on existing benchmarks. The code for this paper is available at https://github.com/National-Center-for-AI-Saudi-Arabia/lm-evaluation-harness.
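To make the "minor perturbation" concrete, below is a minimal sketch of one such perturbation: shuffling the order of answer choices in an MMLU-style item while remapping the gold label. The example item and the function name `shuffle_choices` are hypothetical illustrations, not the paper's actual harness code, which lives in the linked lm-evaluation-harness fork.

```python
import random

def shuffle_choices(question, choices, answer_idx, seed=0):
    """Return the same item with its choices permuted and the gold index remapped.

    This is the kind of choice-order perturbation the paper studies: the
    question content is unchanged, yet models can flip their answers.
    """
    rng = random.Random(seed)  # fixed seed so the perturbation is reproducible
    order = list(range(len(choices)))
    rng.shuffle(order)
    shuffled = [choices[i] for i in order]
    # Track where the original gold answer ended up after the shuffle.
    new_answer_idx = order.index(answer_idx)
    return question, shuffled, new_answer_idx

if __name__ == "__main__":
    # Hypothetical MMLU-style item for illustration only.
    q = "What is the capital of France?"
    opts = ["Berlin", "Paris", "Madrid", "Rome"]
    _, new_opts, new_gold = shuffle_choices(q, opts, answer_idx=1, seed=42)
    labels = "ABCD"
    for label, opt in zip(labels, new_opts):
        print(f"{label}. {opt}")
    print("Gold:", labels[new_gold])
```

Evaluating each model on both the original and the permuted items, and comparing the resulting leaderboard orderings, is one way to reproduce the ranking-instability effect described above under these assumptions.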