LLMs can help humans working with long documents, but are known to hallucinate. Attribution can increase trust in LLM responses: the LLM provides evidence that supports its response, which enhances verifiability. Existing approaches to attribution have only been evaluated in RAG settings, where the initial retrieval confounds LLM performance. This is crucially different from the long-document setting, where retrieval is not needed but could still help. Thus, a long-document-specific evaluation of attribution is missing. To fill this gap, we present LAB, a benchmark of 6 diverse long-document tasks with attribution, and experiment with different approaches to attribution on 4 LLMs of different sizes, both prompted and fine-tuned. We find that citation, i.e., response generation and evidence extraction in a single step, performs best in most cases. We investigate whether the ``Lost in the Middle'' phenomenon also affects attribution, but find no evidence of it. We further find that evidence quality can predict response quality on datasets with simple responses, but not on those with complex responses, as models struggle to provide evidence for complex claims. We release code and data for further investigation.