Ensuring the verifiability of model answers is a fundamental challenge for retrieval-augmented generation (RAG) in the question answering (QA) domain. Recently, self-citation prompting was proposed to make large language models (LLMs) generate citations to supporting documents along with their answers. However, self-citing LLMs often struggle to match the required format, refer to non-existent sources, and fail to faithfully reflect their actual context usage throughout generation. In this work, we present MIRAGE -- Model Internals-based RAG Explanations -- a plug-and-play approach using model internals for faithful answer attribution in RAG applications. MIRAGE detects context-sensitive answer tokens and pairs them, via saliency methods, with the retrieved documents that contributed to their prediction. We evaluate our approach on a multilingual extractive QA dataset, finding high agreement with human answer attribution. On open-ended QA, MIRAGE achieves citation quality and efficiency comparable to self-citation while also allowing finer-grained control over attribution parameters. Our qualitative evaluation highlights the faithfulness of MIRAGE's attributions and underscores the promise of model internals for RAG answer attribution.
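To make the two-step idea concrete, below is a minimal Python sketch of internals-based answer attribution, assuming a HuggingFace causal LM. It is not the authors' implementation: the context-sensitivity test (log-probability drop when the retrieved context is removed), the gradient-times-input saliency, the per-document aggregation, and all names and thresholds are illustrative assumptions.

```python
# Minimal sketch of the two-step attribution idea described above; NOT the
# authors' implementation. Step 1 flags answer tokens whose probability
# drops sharply once the retrieved context is removed; step 2 attributes a
# flagged token to the retrieved documents via gradient-times-input
# saliency. Model name, threshold, and aggregation are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; any HuggingFace causal LM would do
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def answer_logprobs(prompt_ids, answer_ids):
    """Log-probability the model assigns to each answer token."""
    ids = torch.cat([prompt_ids, answer_ids], dim=-1)
    with torch.no_grad():
        logits = model(ids).logits
    # logits at position t predict the token at position t + 1
    preds = torch.log_softmax(logits[0, prompt_ids.shape[-1] - 1 : -1], dim=-1)
    return preds.gather(-1, answer_ids[0].unsqueeze(-1)).squeeze(-1)

def context_sensitive_tokens(question, docs, answer, threshold=1.0):
    """Step 1: indices of answer tokens whose log-prob drops by more than
    `threshold` nats when the context is removed (threshold is illustrative)."""
    ctx_prompt = tok("\n".join(docs) + "\n" + question, return_tensors="pt").input_ids
    bare_prompt = tok(question, return_tensors="pt").input_ids
    ans = tok(answer, return_tensors="pt").input_ids
    delta = answer_logprobs(ctx_prompt, ans) - answer_logprobs(bare_prompt, ans)
    return [i for i, d in enumerate(delta.tolist()) if d > threshold]

def saliency_per_document(question, docs, answer, token_idx):
    """Step 2: gradient-times-input saliency of one answer token with
    respect to the input, summed over each document's token span."""
    prompt_ids = tok("\n".join(docs) + "\n" + question, return_tensors="pt").input_ids
    ans_ids = tok(answer, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, ans_ids], dim=-1)
    embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds).logits
    pos = prompt_ids.shape[-1] + token_idx  # absolute position of the token
    logits[0, pos - 1, ans_ids[0, token_idx]].backward()
    saliency = (embeds.grad * embeds).sum(-1)[0].abs()
    # Sum saliency over each document's span; spans are approximated by
    # re-tokenizing the documents one at a time (a simplification).
    scores, start = [], 0
    for d in docs:
        n = tok(d + "\n", return_tensors="pt").input_ids.shape[-1]
        scores.append(saliency[start : start + n].sum().item())
        start += n
    return scores  # higher score = larger contribution to this token
```

In this sketch, each token flagged by `context_sensitive_tokens` would be passed to `saliency_per_document`, and the token is attributed to the documents whose scores stand out; MIRAGE's actual detection and attribution criteria differ and are described in the paper.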