Retrieval-Augmented Generation (RAG) helps LLMs stay accurate, but feeding long documents into the prompt makes inference slow and expensive. This has motivated context compression, ranging from token pruning and summarization to embedding-based compression. Compressing documents into shorter summaries or dense embeddings comes with a catch, however: the more aggressively the context is compressed, the harder it becomes for the LLM to make use of it. To address this challenge, we propose ArcAligner (Adaptive recursive context *Aligner*), a lightweight module integrated into the language model's layers that helps the model better exploit highly compressed context representations during downstream generation. An adaptive gating mechanism invokes extra processing only when the compressed information is complex, keeping the system fast. Across knowledge-intensive QA benchmarks, ArcAligner consistently outperforms compression baselines at comparable compression rates, especially in multi-hop and long-tail settings. The source code is publicly available.
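The adaptive gating idea can be illustrated with a minimal sketch. Everything below is hypothetical: the class name `GatedAligner`, the sigmoid gate scorer, the hard threshold, and the residual-style alignment transform are illustrative assumptions, not the paper's actual architecture. The sketch only shows the general pattern of spending extra compute on a compressed-context vector when a learned gate scores it as "complex".

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class GatedAligner:
    """Hypothetical sketch of an adaptive gated alignment module.

    A learned gate scores each compressed-context vector; only vectors
    scoring above a threshold pay for an extra alignment transform,
    while simple ones pass through unchanged.
    """

    def __init__(self, dim, threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-ins for learned parameters (random here for illustration).
        self.w_gate = rng.normal(scale=0.1, size=dim)          # gate scorer
        self.W_align = rng.normal(scale=0.1, size=(dim, dim))  # alignment transform
        self.threshold = threshold

    def __call__(self, z):
        # z: (n, dim) batch of compressed context representations.
        gate = sigmoid(z @ self.w_gate)      # (n,) "complexity" score in (0, 1)
        out = z.copy()
        hard = gate > self.threshold         # boolean mask of complex rows
        if hard.any():
            # Residual-style update applied only to the complex rows,
            # scaled by the (soft) gate value.
            out[hard] = out[hard] + gate[hard, None] * (out[hard] @ self.W_align)
        return out, hard
```

Rows whose gate score falls below the threshold skip the matrix multiply entirely, which is where the latency savings in such a design would come from; in a real model the gate and transform would be trained jointly with the language model layers they sit inside.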