The pursuit of leaderboard rankings in Large Language Models (LLMs) has created a fundamental paradox: models excel at standardized tests while failing to demonstrate genuine language understanding and adaptability. Our systematic analysis of NLP evaluation frameworks reveals pervasive vulnerabilities across the evaluation spectrum, from basic metrics to complex benchmarks such as GLUE and MMLU. These vulnerabilities manifest through benchmark exploitation, dataset contamination, and evaluation bias, creating a false perception of progress in language understanding. Through an extensive review of contemporary evaluation approaches, we identify significant limitations in static benchmark designs, human evaluation protocols, and LLM-as-judge frameworks, all of which compromise the reliability of current performance assessments. As LLM capabilities evolve and existing benchmarks become obsolete, we lay the groundwork for new evaluation methods that resist manipulation, minimize data contamination, and assess domain-specific tasks. This requires frameworks that adapt dynamically, addressing current limitations and providing a more accurate reflection of LLM performance.