Recent advances in large language models (LLMs) have introduced the novel paradigm of using LLMs as judges, in which one LLM evaluates and scores the outputs of another; these judgments often correlate highly with human preferences. However, the LLM-as-a-judge framework has been studied primarily in English. In this paper, we evaluate it in Russian by introducing the Russian Error tyPes Annotation dataset (REPA), which comprises 1k user queries and 2k LLM-generated responses. Human annotators labeled each response pair, expressing their preferences across ten specific error types as well as an overall preference. We rank six generative LLMs across the error types using three rating systems based on human preferences. We also evaluate the responses with eight LLM judges in zero-shot and few-shot settings, and analyze the judges' performance along with their position and length biases. Our findings reveal a notable gap between LLM judge performance in Russian and English. However, rankings based on human and LLM preferences show partial alignment, suggesting that while current LLM judges struggle with fine-grained evaluation in Russian, there is potential for improvement.