As Large Language Models (LLMs) expand across domains, LLM judges have become essential for system evaluation. Current benchmarks typically compare system outputs against baselines. This baseline-mediated approach, though convenient, yields lower reliability than direct comparison between systems. We propose Arena-Lite, which integrates a tournament structure on top of direct head-to-head comparisons between systems. Combining a tournament structure with direct comparison eliminates the need for baseline outputs, reduces the number of required comparisons, and yields more reliable system rankings. We conducted two experiments: (1) controlled stochastic modeling and (2) empirical validation with a real LLM judge. These experiments collectively demonstrate that Arena-Lite consistently achieves higher reliability with fewer comparisons, even with smaller datasets or weaker judges. We release an easy-to-use web demonstration and code to foster adoption of Arena-Lite, streamlining model selection across research and industry communities. The Arena-Lite demo and code are available at \href{https://huggingface.co/spaces/NCSOFT/ArenaLite}{https://huggingface.co/spaces/NCSOFT/ArenaLite}
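To make the comparison-saving intuition concrete, the sketch below shows tournament-style ranking driven by a pairwise judge. This is an illustration under assumptions, not the released Arena-Lite implementation: it assumes a single-elimination bracket (the paper's exact tournament format may differ), the judge is stubbed with a length heuristic in place of a real LLM judge, and names such as match and tournament_rank are hypothetical.

\begin{verbatim}
# Minimal sketch (not the authors' code): rank N systems with a
# single-elimination tournament of head-to-head comparisons.
import random
from typing import Callable, Dict, List

def judge(prompt: str, a: str, b: str) -> int:
    """Return 0 if response a wins, 1 if b wins (stub heuristic,
    stands in for an LLM judge)."""
    return 0 if len(a) >= len(b) else 1

def match(sys_a: str, sys_b: str,
          outputs: Dict[str, List[str]], prompts: List[str],
          judge_fn: Callable[[str, str, str], int]) -> str:
    """Compare two systems over all prompts; majority of wins decides."""
    wins_a = sum(
        1 for p, oa, ob in zip(prompts, outputs[sys_a], outputs[sys_b])
        if judge_fn(p, oa, ob) == 0
    )
    return sys_a if wins_a * 2 >= len(prompts) else sys_b

def tournament_rank(systems: List[str], outputs, prompts, judge_fn=judge):
    """Single-elimination bracket; systems eliminated later rank higher."""
    ranking: List[List[str]] = []      # groups of systems, worst first
    alive = systems[:]
    random.Random(0).shuffle(alive)    # random seeding of the bracket
    while len(alive) > 1:
        next_round, eliminated = [], []
        for i in range(0, len(alive) - 1, 2):
            winner = match(alive[i], alive[i + 1], outputs, prompts, judge_fn)
            loser = alive[i + 1] if winner == alive[i] else alive[i]
            next_round.append(winner)
            eliminated.append(loser)
        if len(alive) % 2 == 1:        # odd system out gets a bye
            next_round.append(alive[-1])
        ranking.append(eliminated)
        alive = next_round
    ranking.append(alive)              # champion
    return ranking[::-1]               # best group first

# Usage with toy data: 4 systems are ranked with only 3 matches,
# instead of 4 comparisons against a separate baseline system.
prompts = ["q1", "q2", "q3"]
outputs = {"sys_a": ["aa", "a", "aaa"], "sys_b": ["b", "bb", "b"],
           "sys_c": ["ccc", "cc", "c"], "sys_d": ["d", "dddd", "dd"]}
print(tournament_rank(list(outputs), outputs, prompts))
\end{verbatim}

With N systems, a single-elimination bracket needs only N-1 matches, which is where the reduction in required comparisons comes from in this sketch.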