Work on instruction-tuned Large Language Models (LLMs) has used automatic methods based on text overlap and LLM judgments as cost-effective alternatives to human evaluation. In this paper, we perform a meta-evaluation of such methods and assess their reliability across a broad range of tasks. We observe that while automatic evaluation methods can approximate human ratings under specific conditions, their validity is highly context-dependent. Specifically, the simple ROUGE-L metric correlates well with human ratings on short-answer English tasks but is unreliable for free-form generation and cross-lingual transfer. The effectiveness of the more advanced method of using GPT-4 as a judge diminishes significantly if reference answers are not included in the prompt, which is precisely the scenario where this method could provide the most value over other metrics. Our findings enhance the understanding of how automatic methods should be applied and interpreted when developing and evaluating instruction-tuned LLMs.
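For context on the ROUGE-L result discussed above, here is a minimal sketch of how the metric works: it scores a candidate answer against a reference by the length of their Longest Common Subsequence (LCS) of tokens, combined into an F-measure from LCS-based precision and recall. This is an illustrative implementation, not the paper's evaluation code; the whitespace tokenization and balanced F1 weighting are simplifying assumptions (production evaluations typically use a dedicated package such as rouge-score, with proper tokenization and stemming).

```python
def lcs_len(a, b):
    # Classic dynamic-programming LCS length over two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l(candidate, reference):
    # Whitespace tokenization is a simplification for illustration only.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision = lcs / len(c)   # fraction of candidate tokens in the LCS
    recall = lcs / len(r)      # fraction of reference tokens in the LCS
    return 2 * precision * recall / (precision + recall)  # balanced F1
```

Because the score depends entirely on token overlap with a reference, it is easy to see why it tracks human judgments on constrained short-answer tasks yet breaks down for free-form generation, where many valid answers share few tokens with any single reference.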