Retrieval-Augmented Language Models (RALMs) have shown remarkable performance on knowledge-intensive tasks by incorporating external knowledge during inference, which mitigates the factual hallucinations inherent in large language models (LLMs). Despite these advancements, challenges persist in deploying RALMs, particularly concerning their reliability and traceability. Specifically, retrieving irrelevant documents may lead to unhelpful responses or even degrade LLM performance, while the lack of proper citations in generated outputs makes it difficult to verify the trustworthiness of the models. To this end, we propose a novel self-reasoning framework aimed at improving the reliability and traceability of RALMs, whose core idea is to leverage reasoning trajectories generated by the LLM itself. The framework constructs self-reasoning trajectories through three processes: a relevance-aware process, an evidence-aware selective process, and a trajectory analysis process. We evaluate our framework on four public datasets (two short-form QA datasets, one long-form QA dataset, and one fact verification dataset), demonstrating that our method outperforms existing state-of-the-art models and achieves performance comparable to GPT-4 while using only 2,000 training samples.
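The three-process trajectory construction can be sketched as a simple pipeline. This is a minimal, hypothetical illustration of the abstract's description, not the paper's actual implementation: the function names, prompt strings, and the `llm` callable are all assumptions for exposition.

```python
# Hypothetical sketch of the three-process self-reasoning pipeline.
# `llm` is assumed to be any callable mapping a prompt string to a reply string.

def relevance_aware(llm, question, documents):
    """Relevance-aware process: judge whether each retrieved document is relevant."""
    return [d for d in documents
            if llm(f"Is this relevant to '{question}'? {d}").strip().lower() == "yes"]

def evidence_aware_selection(llm, question, relevant_docs):
    """Evidence-aware selective process: pick citable evidence snippets per document."""
    return [{"doc": d, "snippet": llm(f"Key evidence for '{question}': {d}")}
            for d in relevant_docs]

def trajectory_analysis(llm, question, evidence):
    """Trajectory analysis process: synthesize the final answer from the trajectory."""
    cited = "; ".join(e["snippet"] for e in evidence)
    return llm(f"Answer '{question}' using: {cited}")

def self_reasoning(llm, question, documents):
    """Chain the three processes into one self-reasoning trajectory."""
    relevant = relevance_aware(llm, question, documents)
    evidence = evidence_aware_selection(llm, question, relevant)
    answer = trajectory_analysis(llm, question, evidence)
    return {"relevant": relevant, "evidence": evidence, "answer": answer}
```

Keeping the intermediate relevance judgments and evidence snippets in the returned trajectory is what makes the output traceable: each claim in the answer can be checked against the cited snippets.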