Ensuring that Large Language Models (LLMs) generate summaries faithful to a given source document is essential for real-world applications. While prior research has explored LLM faithfulness, existing benchmarks suffer from annotation ambiguity, primarily due to the ill-defined boundary of permissible external knowledge in generated outputs. For instance, common sense is often incorporated into responses and labeled as "faithful", yet the acceptable extent of such knowledge remains unspecified, leading to inconsistent annotations. To address this issue, we propose a novel faithfulness annotation framework that introduces an intermediate category, Out-Dependent, to classify cases where external knowledge is required for verification. Using this framework, we construct VeriGray (Verification with the Gray Zone) -- a new unfaithfulness detection benchmark for summarization. Statistics reveal that even state-of-the-art LLMs, such as GPT-5, exhibit hallucinations ($\sim 6\%$ of sentences) in summarization tasks. Moreover, a substantial proportion of generated sentences ($\sim 9\%$ averaged across models) fall into the Out-Dependent category, underscoring the importance of resolving annotation ambiguity in unfaithfulness detection benchmarks. Experiments demonstrate that our benchmark poses significant challenges to multiple baseline methods, indicating considerable room for future improvement.