Alignment approaches such as RLHF and DPO are actively investigated to align large language models (LLMs) with human preferences. Commercial LLMs like GPT-4 have recently been employed to evaluate and compare different LLM alignment approaches. These models act as surrogates for human evaluators, owing to their promising ability to approximate human preferences with remarkably faster feedback and at lower cost. This methodology is referred to as LLM-as-a-judge. However, concerns about its reliability have emerged, owing to LLM judges' biases and inconsistent decision-making. Previous research has sought to develop robust evaluation frameworks for assessing the reliability of LLM judges and their alignment with human preferences. However, the evaluation metrics employed often lack adequate explainability and fail to account for the internal inconsistency of LLMs. Moreover, existing studies inadequately explore the impact of different prompt templates when applying LLM-as-a-judge methods, which leads to potentially inconsistent comparisons between alignment algorithms. In this work, we systematically evaluate LLM judges on alignment tasks (e.g., summarization) by defining evaluation metrics with improved theoretical interpretability and by disentangling reliability metrics from LLM internal inconsistency. We develop a framework to evaluate, compare, and visualize the reliability and alignment of LLM judges, providing informative observations that help practitioners choose LLM judges for alignment tasks. Our results show that prompt templates have a significant impact on LLM judge performance, and that the tested LLM judges achieve only mediocre alignment with human evaluators.
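The internal inconsistency mentioned above can be made concrete with a minimal sketch (the judge function and data here are hypothetical stand-ins, not the paper's actual setup): a pairwise judge is queried twice with the candidate order swapped, and a verdict counts as consistent only if it flips accordingly.

```python
# Minimal sketch of position-swap consistency for a pairwise judge.
# `mock_judge` is a hypothetical stand-in for a real LLM judge call (e.g. GPT-4).

def mock_judge(response_a: str, response_b: str) -> str:
    """Toy judge: prefers the longer response, biased toward slot A on ties."""
    return "A" if len(response_a) >= len(response_b) else "B"

def consistency_rate(pairs):
    """Fraction of pairs whose verdict is stable under swapping the order.

    A verdict is consistent if judging (a, b) -> "A" implies judging
    (b, a) -> "B", and vice versa; otherwise the judge shows position bias.
    """
    consistent = 0
    for a, b in pairs:
        first = mock_judge(a, b)
        second = mock_judge(b, a)
        if {first, second} == {"A", "B"}:
            consistent += 1
    return consistent / len(pairs)

pairs = [
    ("a concise summary", "a much longer, more detailed summary"),
    ("same length!", "same length?"),  # tie -> judged "A" both times (inconsistent)
    ("short", "a detailed answer"),
]
print(consistency_rate(pairs))  # 2 of 3 pairs survive the order swap
```

A real evaluation would replace `mock_judge` with an API call and a prompt template; the point of the sketch is that the consistency rate isolates the judge's internal inconsistency from its agreement with human labels, so the two can be reported as separate metrics.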