LLM-as-a-Judge and reward models are widely used alternatives to multiple-choice questions or human annotators for large language model (LLM) evaluation. Their efficacy shines in evaluating long-form responses, serving a critical role as evaluators on leaderboards and as proxies for aligning LLMs via reinforcement learning. However, despite their popularity, their effectiveness in diverse contexts, such as non-English prompts, factual verification, or challenging questions, remains underexplored. In this paper, we conduct a comprehensive analysis of automated evaluators, reporting several key findings on their behavior. First, we discover that English evaluation capabilities significantly influence language-specific evaluation capabilities, often more than language proficiency itself, enabling evaluators trained in English to easily transfer their skills to other languages. Second, we identify critical shortcomings in which LLMs fail to detect and penalize errors such as factual inaccuracies, cultural misrepresentations, and the presence of unwanted language. Finally, we find that state-of-the-art evaluators struggle with challenging prompts, in both English and Korean, underscoring their limitations in assessing or generating complex reasoning questions. We release the dataset and code used.