The acceleration of research on Large Language Models (LLMs) has opened up new possibilities for evaluating generated text. LLMs serve as scalable and economical evaluators, but how reliable these evaluators are has emerged as a crucial research question. Prior efforts in the meta-evaluation of LLMs as judges limit the use of an LLM to a single prompt that yields a final evaluation decision, and then compute the agreement between the LLM's outputs and human labels. This approach offers little interpretability into the evaluation capability of LLMs. In light of this challenge, we propose Decompose and Aggregate, which breaks the evaluation process down into distinct stages grounded in pedagogical practices. Our experiments show that it not only provides a more interpretable window into how well LLMs evaluate, but also yields improvements of up to 39.6% for different LLMs on a variety of meta-evaluation benchmarks.
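The decompose-then-aggregate idea described above can be sketched as a simple two-stage pipeline. This is a minimal illustration, not the paper's implementation: the criterion names, weights, and the scoring stub are assumptions, and in the actual method each per-criterion score would come from prompting an LLM.

```python
# Minimal sketch of a decompose-and-aggregate evaluation pipeline.
# Stage 1 (decompose): score a text on several hypothetical criteria.
# Stage 2 (aggregate): combine per-criterion scores into one decision.

# Hypothetical criteria and weights, chosen only for illustration.
CRITERIA = {"relevance": 0.4, "fluency": 0.3, "factuality": 0.3}

def score_criterion(text: str, criterion: str) -> float:
    """Placeholder for an LLM call that rates `text` on one criterion in [0, 1]."""
    # Toy stand-in so the sketch runs end to end: longer texts score higher.
    return min(len(text) / 100.0, 1.0)

def evaluate(text: str) -> float:
    """Aggregate the per-criterion scores into a weighted final score."""
    return sum(w * score_criterion(text, c) for c, w in CRITERIA.items())

def compare(text_a: str, text_b: str) -> str:
    """Return which of two candidate texts the aggregated scores prefer."""
    return "A" if evaluate(text_a) >= evaluate(text_b) else "B"
```

Exposing the per-criterion scores (rather than only the final verdict) is what gives the interpretable window into the LLM's evaluation behavior.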