Retrieval-augmented large language models (LLMs) have demonstrated remarkable competence across a range of NLP tasks. However, prior work has observed that retrieval is not always helpful, especially when the LLM already possesses the knowledge needed to answer the query. Motivated by this, Adaptive Retrieval-Augmented Generation (ARAG) studies retrieving only when the knowledge required by the query is absent from the LLM. Existing ARAG methods either require access to the pre-training corpus or rely on prompting with additional model inferences. To avoid these drawbacks, we propose to determine whether the model is knowledgeable about a query by inspecting the (contextualized) pre-trained token embeddings of the LLM. We hypothesize that such embeddings capture rich information about the model's intrinsic knowledge base, enabling an efficient judgment of whether retrieval from an external corpus is necessary. Extensive experiments demonstrate the superior performance of our ARAG approach across various benchmarks.
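The core decision described above can be sketched as a lightweight probe over the model's contextualized token embeddings. The sketch below is purely illustrative and assumes a mean-pooling step and a linear probe with a sigmoid output; the abstract does not specify the probe's form, and `should_retrieve`, `probe_w`, `probe_b`, and `threshold` are hypothetical names introduced here.

```python
import numpy as np

def should_retrieve(token_embeddings: np.ndarray,
                    probe_w: np.ndarray,
                    probe_b: float,
                    threshold: float = 0.5) -> bool:
    """Decide whether to retrieve from an external corpus, using a
    linear probe over mean-pooled contextualized token embeddings.

    token_embeddings: (seq_len, hidden_dim) array of the LLM's
        contextualized embeddings for the query tokens.
    probe_w, probe_b: parameters of a lightweight classifier trained
        to predict whether the model lacks the queried knowledge
        (a hypothetical probe; one possible instantiation).
    """
    pooled = token_embeddings.mean(axis=0)       # (hidden_dim,)
    logit = pooled @ probe_w + probe_b
    p_unknown = 1.0 / (1.0 + np.exp(-logit))     # sigmoid score
    # Retrieve only when the model likely lacks the knowledge.
    return bool(p_unknown >= threshold)

# Toy usage with random embeddings (illustration only).
rng = np.random.default_rng(0)
emb = rng.standard_normal((12, 64))   # 12 query tokens, hidden size 64
w = rng.standard_normal(64)
print(should_retrieve(emb, w, probe_b=0.0))
```

The appeal of this formulation is that it adds only a pooling step and a single dot product on top of a forward pass the model performs anyway, avoiding both pre-training-corpus access and extra LLM inferences.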