This paper introduces Neurocache, an approach to extending the effective context size of large language models (LLMs) by storing their past states in an external vector cache. Like recent vector-retrieval approaches, Neurocache uses an efficient k-nearest-neighbor (kNN) algorithm to retrieve relevant past states and incorporate them into the attention process. Neurocache improves upon previous methods by (1) storing compressed states, which reduces cache size; (2) performing a single retrieval operation per token, which increases inference speed; and (3) extending the retrieval window to neighboring states, which improves both language modeling and downstream task accuracy. Our experiments show the effectiveness of Neurocache both for models trained from scratch and for pre-trained models such as Llama2-7B and Mistral-7B when enhanced with the cache mechanism. We also compare Neurocache with text-retrieval methods and show improvements on single-document question-answering and few-shot learning tasks. The source code is available at: https://github.com/alisafaya/neurocache
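As a rough illustration of the retrieval step summarized above, the sketch below retrieves the top-k compressed past states for a query and extends the result to neighboring cache entries. It is a minimal assumption-laden toy, not the paper's implementation: the function name `knn_retrieve`, the dot-product similarity, and the `window` parameter are illustrative choices; the actual Neurocache compression, similarity metric, and attention integration are described in the paper itself.

```python
import numpy as np

def knn_retrieve(cache, query, k=2, window=1):
    """Toy kNN lookup over a cache of compressed past states.

    cache : (N, d) array of compressed state vectors (one per past token)
    query : (d,) projection of the current state
    k     : number of nearest states to retrieve
    window: how many neighboring states on each side to include,
            mimicking the extended retrieval window

    Returns the retrieved state vectors and their cache indices.
    """
    scores = cache @ query                      # dot-product similarity
    topk = np.argsort(-scores)[:k]              # indices of k best matches
    # Extend each retrieved index to its neighbors within the window.
    idx = sorted({j
                  for i in topk
                  for j in range(max(0, i - window),
                                 min(len(cache), i + window + 1))})
    return cache[idx], idx

# Toy usage: a 4-entry cache of 4-d states; the query matches entry 2,
# so with window=1 the neighbors 1 and 3 are retrieved as well.
cache = np.eye(4)
states, idx = knn_retrieve(cache, np.array([0.0, 0.0, 1.0, 0.0]), k=1, window=1)
```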