In-context learning (ICL) approaches typically leverage prompting to condition decoder-only language model generation on reference information. Just-in-time processing of a context is inefficient due to the quadratic cost of self-attention operations, so caching is desirable; however, cached transformer states can easily require almost as much space as the model parameters themselves, and when the right context isn't known in advance, caching for ICL can be challenging. This work addresses these limitations by introducing models that, inspired by the encoder-decoder architecture, use cross-attention to condition generation on reference text without including it in the prompt. More precisely, we leverage pre-trained decoder-only models and train only a small number of added layers. We use question answering (QA) as a testbed to evaluate the ability of our models to perform conditional generation, and observe that they outperform ICL, are comparable to fine-tuned prompted LLMs, and reduce the space footprint relative to standard KV caching by two orders of magnitude.
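The core mechanism described above can be sketched as a single cross-attention step in which the decoder's hidden states attend to a cached encoding of the reference text rather than re-reading it from the prompt. This is a minimal single-head illustration under assumed shapes and weight names (`w_q`, `w_k`, `w_v` are hypothetical), not the paper's exact architecture:

```python
import numpy as np

def cross_attention(hidden, context, w_q, w_k, w_v):
    # Queries come from the decoder's own hidden states; keys and values
    # come from a precomputed (cacheable) encoding of the reference text.
    q = hidden @ w_q
    k = context @ w_k
    v = context @ w_v
    # Scaled dot-product attention with a numerically stable softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Residual connection: the added layer refines the frozen decoder's
    # states rather than replacing them.
    return hidden + weights @ v

rng = np.random.default_rng(0)
d = 16
hidden = rng.standard_normal((5, d))     # states for 5 tokens being generated
context = rng.standard_normal((20, d))   # cached 20-token reference encoding
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

out = cross_attention(hidden, context, w_q, w_k, w_v)
print(out.shape)  # (5, 16)
```

Only the small projection matrices of such added layers would be trained; the context encoding can be computed once and cached, which is where the space savings over storing full per-layer KV states would come from.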