Rating-based human evaluation has become an essential tool for accurately evaluating the impressive performance of large language models (LLMs). However, current rating systems suffer from several important limitations: first, they fail to account for biases that significantly influence evaluation results; second, they require large and expensive preference datasets to obtain accurate ratings; and third, they do not facilitate meaningful comparisons of model ratings across different tasks. To address these issues, we introduce Polyrating, an expressive and flexible rating system based on maximum a posteriori estimation that enables a more nuanced and thorough analysis of model performance at lower costs. Polyrating can detect and quantify biases affecting human preferences, ensuring fairer model comparisons. Further, Polyrating can reduce the cost of human evaluations by up to $41\%$ for new models and up to $77\%$ for new tasks by leveraging existing benchmark scores. Lastly, Polyrating enables direct comparisons of ratings across different tasks, providing a comprehensive understanding of an LLM's strengths, weaknesses, and relative performance across different applications.
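To make the core idea concrete, the following is a minimal, hypothetical sketch of maximum a posteriori rating estimation in the simplest setting: a Bradley-Terry preference model with a Gaussian prior on ratings, fit by gradient ascent. This is an illustration of the general technique only, not Polyrating itself; the function name, data layout, and hyperparameters are assumptions for the example.

```python
import numpy as np

# Illustrative sketch (not Polyrating): MAP Bradley-Terry ratings.
# Model: P(i beats j) = sigmoid(r_i - r_j), with prior r ~ N(0, sigma^2).
def map_ratings(pairs, n_models, sigma=1.0, lr=0.1, steps=2000):
    """pairs: list of (winner_idx, loser_idx) preference outcomes."""
    r = np.zeros(n_models)
    for _ in range(steps):
        grad = -r / sigma**2  # gradient of the Gaussian log-prior
        for w, l in pairs:
            # d/dr_w log sigmoid(r_w - r_l) = sigmoid(r_l - r_w)
            p = 1.0 / (1.0 + np.exp(r[w] - r[l]))
            grad[w] += p
            grad[l] -= p
        r += lr * grad
        r -= r.mean()  # ratings are identified only up to a shift
    return r

# Toy data: model 0 wins most comparisons; model 2 is weakest.
pairs = ([(0, 1)] * 8 + [(1, 0)] * 2 +
         [(0, 2)] * 9 + [(2, 0)] * 1 +
         [(1, 2)] * 6 + [(2, 1)] * 4)
ratings = map_ratings(pairs, n_models=3)
```

The prior acts as a regularizer: with few observed comparisons, ratings stay near their prior mean, which is what allows accurate estimates from smaller preference datasets.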