Work on instruction-tuned Large Language Models (LLMs) has used automatic methods based on text overlap and LLM judgments as cost-effective alternatives to human evaluation. In this paper, we perform a meta-evaluation of such methods and assess their reliability across a broad range of tasks. We observe that while automatic evaluation methods can approximate human ratings under specific conditions, their validity is highly context-dependent. Specifically, the simple ROUGE-L metric correlates well with human ratings for short-answer English tasks but is unreliable in free-form generation tasks and cross-lingual transfer. The effectiveness of the more advanced method of using GPT-4 as a judge diminishes significantly if reference answers are not included in the prompt, which is precisely the scenario where this method has the greatest potential to add value over other metrics. Our findings enhance the understanding of how automatic methods should be applied and interpreted when developing and evaluating instruction-tuned LLMs.