The transformer's context window is vital for tasks such as few-shot learning and conditional generation, as it preserves previous tokens as active memory. However, as context length increases, the computational cost grows quadratically, hindering the deployment of large language models (LLMs) in real-world, long-sequence scenarios. Although some recent key-value caching (KV Cache) methods offer linear inference complexity, they manage the stored context naively, prematurely evicting tokens and losing valuable information. Moreover, they lack an optimized strategy for the prefill/prompt stage, resulting in higher latency than even quadratic attention at realistic context sizes. In response, we introduce a novel mechanism that leverages cascading sub-cache buffers to selectively retain the most relevant tokens, enabling the model to maintain longer context histories without increasing the cache size. Our approach outperforms linear caching baselines across key benchmarks, including streaming perplexity, question answering, book summarization, and passkey retrieval, where it retains higher retrieval accuracy at 1M tokens even after a 65K cache has been doubled four times. Additionally, our method reduces prefill-stage latency by a factor of 6.8 compared to flash attention on 1M tokens. These innovations not only enhance the computational efficiency of LLMs but also pave the way for their effective deployment in resource-constrained environments, enabling large-scale, real-time applications with significantly reduced latency.
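The core idea of cascading sub-cache buffers can be illustrated with a toy sketch: a chain of fixed-size FIFO sub-caches where a token evicted from one level either cascades into the next, deeper level or is dropped, depending on an importance score. This is a minimal illustration under assumed simplifications (a scalar stand-in for attention-based importance and a fixed threshold), not the paper's exact algorithm:

```python
from collections import deque


class CascadingCache:
    """Toy cascading KV cache: a chain of fixed-size FIFO sub-caches.

    When a sub-cache overflows, its oldest entry either cascades into
    the next (deeper) sub-cache or is dropped, based on an importance
    score. The class name, the scalar score, and the threshold rule are
    illustrative assumptions, not the authors' implementation.
    """

    def __init__(self, sub_capacity, num_subs, threshold):
        self.subs = [deque() for _ in range(num_subs)]
        self.cap = sub_capacity
        self.threshold = threshold

    def add(self, token_id, score):
        item = (token_id, score)
        for sub in self.subs:
            if len(sub) < self.cap:
                sub.append(item)
                return
            # Sub-cache is full: admit the new item, evict the oldest.
            sub.append(item)
            item = sub.popleft()
            # Selective retention: only important tokens cascade deeper;
            # low-score tokens are evicted for good.
            if item[1] < self.threshold:
                return

    def tokens(self):
        """Token ids currently retained, shallowest sub-cache first."""
        return [t for sub in self.subs for t, _ in sub]
```

With this policy, total cache size stays bounded at `sub_capacity * num_subs`, while old-but-important tokens survive in the deeper sub-caches instead of being evicted in strict FIFO order, which is the failure mode of naive linear caches described above.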