Ensuring the verifiability of model answers is a fundamental challenge for retrieval-augmented generation (RAG) in the question answering (QA) domain. Recently, self-citation prompting was proposed to make large language models (LLMs) generate citations to supporting documents along with their answers. However, self-citing LLMs often struggle to match the required format, refer to non-existent sources, and fail to faithfully reflect the model's context usage throughout generation. In this work, we present MIRAGE (Model Internals-based RAG Explanations), a plug-and-play approach that uses model internals for faithful answer attribution in RAG applications. MIRAGE detects context-sensitive answer tokens and pairs them, via saliency methods, with the retrieved documents that contribute to their prediction. We evaluate our approach on a multilingual extractive QA dataset, finding high agreement with human answer attribution. On open-ended QA, MIRAGE achieves citation quality and efficiency comparable to self-citation while also allowing finer-grained control over attribution parameters. Our qualitative evaluation highlights the faithfulness of MIRAGE's attributions and underscores the promise of model internals for RAG answer attribution.
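To make the two stages named above concrete, the following is a minimal, hypothetical sketch in Python (PyTorch and Hugging Face transformers). Stage 1 flags context-sensitive answer tokens with a simple contrastive log-probability test; Stage 2 attributes each flagged token to a retrieved document via gradient-norm saliency. The threshold `tau`, the prompt layout, the `doc_spans` bookkeeping, and the choice of plain gradient saliency are all assumptions for illustration; the paper's actual detection and attribution criteria may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # any causal LM suffices for this sketch
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def answer_logprobs(prompt: str, answer: str) -> torch.Tensor:
    """Per-token log-probabilities of `answer` as a continuation of `prompt`."""
    p_ids = tok(prompt, return_tensors="pt").input_ids
    a_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([p_ids, a_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict the token at position i + 1.
    logp = torch.log_softmax(logits[0, p_ids.size(1) - 1 : -1], dim=-1)
    return logp.gather(1, a_ids[0].unsqueeze(1)).squeeze(1)

def context_sensitive_mask(question, docs, answer, tau=0.5):
    """Stage 1 (illustrative): flag answer tokens whose log-probability rises
    by more than `tau` when the retrieved documents are prepended."""
    with_ctx = answer_logprobs("\n".join(docs) + "\n" + question, answer)
    without = answer_logprobs(question, answer)
    return (with_ctx - without) > tau  # boolean mask over answer tokens

def attribute_token(prompt, answer, tok_idx, doc_spans):
    """Stage 2 (illustrative): gradient-norm saliency of one flagged answer
    token, aggregated over each document's token span in the prompt.
    `doc_spans` = [(start, end), ...] token ranges; deriving them from the
    prompt layout is assumed here."""
    p_ids = tok(prompt, return_tensors="pt").input_ids
    a_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([p_ids, a_ids], dim=1)
    emb = model.get_input_embeddings()(ids).detach().requires_grad_(True)
    logits = model(inputs_embeds=emb).logits
    pos = p_ids.size(1) + tok_idx  # sequence position of this answer token
    logits[0, pos - 1, a_ids[0, tok_idx]].backward()
    saliency = emb.grad[0].norm(dim=-1)  # one score per input token
    doc_scores = [saliency[s:e].sum().item() for s, e in doc_spans]
    return max(range(len(doc_scores)), key=doc_scores.__getitem__)
```

A real pipeline would also need to handle tokenizer seam effects between prompt and answer and could swap the gradient norm for any other saliency or attention-based attribution score; the sketch glosses over these details.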