The rise of large language models (LLMs) has brought a critical need for high-quality human-labeled data, particularly for processes like human feedback and evaluation. A common practice is to label data via consensus annotation over human judgments. However, annotators' judgments for subjective tasks can differ in many ways: they may reflect different qualitative judgments about an example, and they may be mapped to a labeling scheme in different ways. We show that these nuances can be captured by natural language explanations, and propose a method to rescale ordinal annotations and explanations using LLMs. Specifically, we feed annotators' Likert ratings and corresponding explanations into an LLM and prompt it to produce a numeric score anchored in a scoring rubric. These scores should reflect the annotators' underlying assessments of the example. The rubric can be designed or modified after annotation, and include distinctions that may not have been known when the original error taxonomy was devised. We explore our technique in the context of rating system outputs for a document-grounded question answering task, where LLMs achieve near-human performance. Our method rescales the raw judgments without impacting agreement and brings the scores closer to human judgments grounded in the same scoring rubric.
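The rescaling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`build_rescale_prompt`, `parse_score`), the rubric text, and the 0-100 score range are illustrative assumptions, and the actual LLM call is omitted.

```python
import re

# Illustrative rubric — the paper's actual rubric is task-specific
# and can be written or revised after annotation is complete.
RUBRIC = """Score the answer from 0 to 100:
90-100: fully correct, complete, and well grounded in the document
50-89: partially correct, with minor omissions or unsupported claims
0-49: largely incorrect or unsupported by the document"""

def build_rescale_prompt(likert_rating: int, explanation: str,
                         rubric: str = RUBRIC) -> str:
    """Combine an annotator's Likert rating and free-text explanation
    into a prompt asking the LLM for a rubric-anchored numeric score."""
    return (
        f"Scoring rubric:\n{rubric}\n\n"
        f"Annotator's Likert rating (1-5): {likert_rating}\n"
        f"Annotator's explanation: {explanation}\n\n"
        "Based on the rating and explanation, output a single integer "
        "score from 0 to 100 anchored in the rubric above.\nScore:"
    )

def parse_score(llm_output: str):
    """Extract the first integer in [0, 100] from the model's reply;
    return None if no valid score is found."""
    match = re.search(r"\b(\d{1,3})\b", llm_output)
    if match:
        score = int(match.group(1))
        if 0 <= score <= 100:
            return score
    return None
```

In use, `build_rescale_prompt(4, "Answer is correct but omits one supporting detail.")` would be sent to the LLM, and `parse_score` applied to its reply; scores parsed this way can then be compared across annotators on a common rubric-anchored scale.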