Large language models (LLMs) often hallucinate and lack the ability to provide attribution for their generations. Semi-parametric LMs, such as kNN-LM, address these limitations by refining the output of an LM for a given prompt using its nearest neighbor matches in a non-parametric data store. However, these models often exhibit slow inference speeds and generate disfluent text. In this paper, we introduce Nearest Neighbor Speculative Decoding (NEST), a novel semi-parametric language modeling approach that is capable of incorporating real-world text spans of arbitrary length into the LM generations and providing attribution to their sources. NEST performs token-level retrieval at each inference step to compute a semi-parametric mixture distribution and identify promising span continuations in a corpus. It then uses an approximate speculative decoding procedure that accepts a prefix of the retrieved span or generates a new token. NEST significantly enhances the generation quality and attribution rate of the base LM across a variety of knowledge-intensive tasks, surpassing the conventional kNN-LM method and performing competitively with in-context retrieval augmentation. In addition, NEST substantially improves the generation speed, achieving a 1.8x speedup in inference time when applied to Llama-2-Chat 70B.
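The two core ingredients above can be sketched in a few lines. This is a minimal, hypothetical illustration of (1) a kNN-LM-style mixture of the parametric LM distribution with a retrieval distribution, and (2) a speculative-decoding-style acceptance loop that keeps a prefix of a proposed (retrieved) span; it is not the paper's exact procedure, and all function names, the interpolation weight `lam`, and the per-token probability inputs are assumptions for illustration.

```python
import numpy as np


def mixture_distribution(p_lm, p_knn, lam):
    """kNN-LM-style interpolation of two next-token distributions.

    p_lm, p_knn: probability vectors over the vocabulary.
    lam: interpolation weight on the retrieval distribution (assumed fixed
    here; NEST computes it adaptively at each step).
    """
    return lam * p_knn + (1.0 - lam) * p_lm


def accept_span_prefix(span_probs_draft, span_probs_target, rng=None):
    """Speculative-style acceptance of a proposed token span.

    Each proposed token t is accepted with probability
    min(1, p_target(t) / p_draft(t)); we stop at the first rejection and
    return the length of the accepted prefix. The retrieved span plays the
    role of the draft; the mixture distribution plays the role of the target.
    """
    rng = rng or np.random.default_rng(0)
    accepted = 0
    for p_draft, p_target in zip(span_probs_draft, span_probs_target):
        if rng.random() < min(1.0, p_target / p_draft):
            accepted += 1
        else:
            break  # reject this token; resample a new one from the target
    return accepted
```

Because a whole span is evaluated per step instead of a single token, the LM forward passes are amortized over the accepted prefix, which is the source of the inference speedup the abstract reports.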