As Natural Language Generation (NLG) systems are increasingly deployed, evaluating their outputs reliably has become a central challenge. Recently, large language models (LLMs) have gained traction as evaluators of generated text, since their judgments tend to align more closely with human preferences than conventional n-gram or embedding-based metrics. In our experiments, we show that LLM judges exhibit low intra-rater reliability: the scores they assign to the same output vary substantially across repeated runs. This variance makes their ratings inconsistent, and in the worst case nearly arbitrary, which makes it difficult to measure how good their judgments actually are. We quantify this inconsistency across a range of NLG tasks and benchmarks and examine whether, with appropriate guidelines, judicious use of LLM judges can still be useful.
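To make the notion of intra-rater reliability concrete, the sketch below shows one simple way it can be measured: score the same outputs with the judge several times and summarize how much the scores move between runs. The data and the within-item standard deviation summary here are illustrative assumptions, not the paper's actual protocol or metrics.

```python
# A minimal sketch (not the paper's actual evaluation protocol) of quantifying
# intra-rater reliability: the same outputs are judged repeatedly, and we
# measure how much the assigned scores vary across those repeated runs.
import statistics

# Hypothetical data: each row is one generated output, each column is a
# repeated judge run over the identical prompt/output pair (scores on 1-10).
scores_per_item = [
    [7, 9, 6, 8],   # item 1 judged four times
    [4, 4, 5, 4],   # item 2
    [8, 5, 9, 7],   # item 3
]

# Per-item spread: standard deviation of repeated scores for the same output.
per_item_sd = [statistics.stdev(runs) for runs in scores_per_item]
print("per-item SD:", [round(s, 2) for s in per_item_sd])

# One summary number: mean within-item SD. Larger values indicate a judge
# that is less consistent with itself across runs.
print("mean within-item SD:", round(statistics.mean(per_item_sd), 2))
```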