As research in large language models (LLMs) continues to accelerate, LLM-based evaluation has emerged as a scalable and cost-effective alternative to human evaluation for comparing the ever-increasing list of models. This paper investigates the efficacy of these ``LLM evaluators'', particularly in using them to assess instruction following, a metric that gauges how closely generated text adheres to the given instruction. We introduce a challenging meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM evaluator to discern instruction-following outputs. We manually curated 419 pairs of outputs, one adhering to the instruction and the other diverging from it, where the diverging output may possess deceptive qualities that mislead an LLM evaluator, e.g., a more engaging tone. Contrary to existing meta-evaluations, we discover that different evaluators (i.e., combinations of LLMs and prompts) exhibit distinct performance on LLMBar, and even the highest-scoring ones have substantial room for improvement. We also present a novel suite of prompting strategies that further close the gap between LLM and human evaluators. With LLMBar, we hope to offer more insight into LLM evaluators and foster future research in developing better instruction-following models.
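To make the setup concrete, below is a minimal sketch of the pairwise evaluation protocol the abstract describes: an evaluator is an LLM plus a prompt that picks the better of two candidate outputs, and meta-evaluation measures the evaluator's accuracy against gold labels on curated pairs. The OpenAI client, the model name, the prompt wording, and the field names (`instruction`, `output_a`, `output_b`, `label`) are illustrative assumptions, not the paper's actual prompts or data format.

```python
# Minimal sketch of a pairwise LLM evaluator and its meta-evaluation,
# assuming the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment. Prompt wording and model name
# are illustrative assumptions, not the paper's actual prompts.
from openai import OpenAI

client = OpenAI()

EVALUATOR_PROMPT = """\
You are evaluating two responses to an instruction.
Pick the response that follows the instruction more faithfully,
ignoring superficial qualities such as tone or length.

Instruction: {instruction}

Response A: {output_a}

Response B: {output_b}

Answer with a single letter, "A" or "B"."""


def pairwise_judge(instruction: str, output_a: str, output_b: str,
                   model: str = "gpt-4") -> str:
    """Return 'A' or 'B': the output the LLM evaluator prefers."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic judging
        messages=[{"role": "user", "content": EVALUATOR_PROMPT.format(
            instruction=instruction, output_a=output_a, output_b=output_b)}],
    )
    answer = resp.choices[0].message.content.strip().upper()
    return "A" if answer.startswith("A") else "B"


def meta_evaluate(pairs: list[dict], model: str = "gpt-4") -> float:
    """Accuracy of the evaluator against gold labels on curated pairs.

    Each pair is a dict with keys: instruction, output_a, output_b,
    and label ('A' or 'B' marking the instruction-following output).
    """
    hits = sum(
        pairwise_judge(p["instruction"], p["output_a"], p["output_b"], model)
        == p["label"]
        for p in pairs
    )
    return hits / len(pairs)
```

Deceptive pairs of the kind LLMBar curates are designed to fool exactly this sort of single-pass judge, which is why the paper explores stronger prompting strategies on top of the base prompt.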