Context compression is an advanced technique that accelerates large language model (LLM) inference by converting long inputs into compact representations. Existing methods primarily rely on autoencoding tasks to train special compression tokens to represent contextual semantics. While autoencoding tasks enable compression tokens to acquire compression capabilities, we observe that such capabilities potentially conflict with actual downstream task requirements, preventing models from learning features that are more beneficial for real-world usage. Based on this observation, we propose Semantic-Anchor Compression (SAC), a novel method that shifts from compression learned through autoencoding tasks to an architecture equipped with compression capability \textit{a priori}. Instead of training models to compress contexts through autoencoding tasks, SAC directly selects so-called anchor tokens from the original context and aggregates contextual information into their key-value (KV) representations. To ensure that anchors can effectively collect information, SAC introduces two key designs: (1) anchor embedding, a learnable embedding vector attached to the selected anchor tokens to mark compression carriers, and (2) bidirectional attention modification, which enables anchor tokens to integrate information from the entire context. Experimental results show that SAC consistently outperforms existing context compression methods across different compression ratios and model sizes on question-answering and long-context summarization tasks. Our data, models, and code have been released at \href{https://github.com/lx-Meteors/SAC}{https://github.com/lx-Meteors/SAC}.
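The two designs can be illustrated with a minimal NumPy sketch. This is not the released implementation; the function names, the per-row mask convention, and the single shared anchor-embedding vector are illustrative assumptions based only on the description above.

```python
import numpy as np

def build_anchor_mask(seq_len, anchor_positions):
    """Causal attention mask, modified so that anchor tokens
    attend bidirectionally over the entire context."""
    # Standard causal mask: position i may attend to positions j <= i.
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    # Bidirectional modification: anchor rows attend to every position,
    # so their KV representations can aggregate the whole context.
    for p in anchor_positions:
        mask[p, :] = True
    return mask

def mark_anchors(token_embeds, anchor_positions, anchor_embed):
    """Add a learnable anchor embedding (here a fixed vector for
    illustration) to the selected tokens to mark compression carriers."""
    out = token_embeds.copy()
    out[anchor_positions] += anchor_embed
    return out
```

Non-anchor tokens keep ordinary causal attention, while each anchor row is fully unmasked; after a forward pass, only the anchors' KV entries would be retained as the compressed context.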