This paper introduces xRAG, an innovative context compression method tailored for retrieval-augmented generation. xRAG reinterprets the document embeddings used in dense retrieval, traditionally employed solely for retrieval, as features from a retrieval modality. By applying modality fusion, xRAG seamlessly integrates these embeddings into the language model's representation space, eliminating the need for their textual counterparts and achieving an extreme compression rate. In xRAG, the only trainable component is the modality bridge; both the retriever and the language model remain frozen. This design allows offline-constructed document embeddings to be reused and preserves the plug-and-play nature of retrieval augmentation. Experimental results demonstrate that xRAG achieves an average improvement of over 10% across six knowledge-intensive tasks and adapts to various language model backbones, from a dense 7B model to an 8x7B Mixture-of-Experts configuration. xRAG not only significantly outperforms previous context compression methods but also matches the performance of uncompressed models on several datasets, while reducing overall FLOPs by a factor of 3.53. Our work pioneers new directions for retrieval-augmented generation from the perspective of multimodality fusion, and we hope it lays the foundation for future efficient and scalable retrieval-augmented systems.
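The core mechanism described above, a trainable bridge that maps a frozen retriever's document embedding into the frozen language model's representation space so that one projected vector stands in for the document's many text tokens, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-layer MLP bridge, the dimensions (768 for the retriever, 4096 for the LM), and all names here are assumptions for exposition.

```python
import numpy as np

# Illustrative (hypothetical) dimensions: a 768-dim retriever embedding
# projected into a 4096-dim language-model hidden space.
D_RET, D_LM = 768, 4096

rng = np.random.default_rng(0)

# The modality bridge: the only trainable component in this setup.
# (A two-layer MLP is an assumption; the actual bridge architecture may differ.)
W1 = rng.normal(scale=0.02, size=(D_RET, D_LM))
W2 = rng.normal(scale=0.02, size=(D_LM, D_LM))

def bridge(doc_embedding: np.ndarray) -> np.ndarray:
    """Project a frozen retriever's document embedding into LM space."""
    h = np.maximum(doc_embedding @ W1, 0.0)  # ReLU nonlinearity
    return h @ W2

# One offline-constructed document embedding, reused as-is (retriever frozen).
doc_embedding = rng.normal(size=(D_RET,))

# The projected vector acts as a single "soft token" prepended to the prompt,
# replacing the document's hundreds of text-token embeddings -- the source of
# the extreme compression rate.
soft_token = bridge(doc_embedding)
assert soft_token.shape == (D_LM,)
```

Because the retriever and language model stay frozen, only `W1` and `W2` would be updated during training, and document embeddings computed once offline can be reused at inference time.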