With the rising popularity of Transformer-based large language models (LLMs), reducing their high inference costs has become a significant research focus. One effective approach is to compress long input contexts. Existing methods typically leverage the self-attention mechanism of the LLM itself for context compression. While these methods achieve notable results, the compression process still incurs quadratic time complexity, which limits their applicability. To mitigate this limitation, we propose the In-Context Former (IC-Former). Unlike previous methods, IC-Former does not depend on the target LLM. Instead, it leverages a cross-attention mechanism and a small number of learnable digest tokens to condense information directly from the contextual word embeddings. This approach significantly reduces inference time, achieving linear time complexity within the compression range. Experimental results indicate that our method requires only 1/32 of the floating-point operations of the baseline during compression and improves processing speed 68- to 112-fold, while achieving over 90% of the baseline performance on evaluation metrics. Overall, our model effectively reduces compression costs and makes real-time compression scenarios feasible.
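The core idea — a fixed, small set of learnable digest tokens querying the context embeddings via cross-attention — can be sketched as follows. This is a minimal single-head NumPy illustration under our own simplifying assumptions (one attention layer, no layer norm or feed-forward blocks); the function and variable names are illustrative, not the paper's actual implementation. Because the number of digest tokens k is a fixed constant, the score matrix is k × n, so the cost grows linearly with context length n rather than quadratically.

```python
import numpy as np

def ic_former_compress(context_emb, digest_tokens, Wq, Wk, Wv):
    """Single cross-attention step: k digest tokens (queries) attend over
    n context embeddings (keys/values), condensing n vectors into k.
    Cost is O(k * n * d), i.e. linear in n for fixed k."""
    Q = digest_tokens @ Wq                       # (k, d) queries
    K = context_emb @ Wk                         # (n, d) keys
    V = context_emb @ Wv                         # (n, d) values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])      # (k, n), not (n, n)
    # Numerically stable softmax over the context dimension.
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ V                              # (k, d): the compressed digest

rng = np.random.default_rng(0)
d, n, k = 64, 512, 16                            # illustrative sizes
ctx = rng.standard_normal((n, d))                # contextual word embeddings
digests = rng.standard_normal((k, d))            # learnable digest tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = ic_former_compress(ctx, digests, Wq, Wk, Wv)
print(out.shape)  # (16, 64) -- 512 context vectors condensed to 16
```

Note that self-attention over the full context would form an n × n score matrix; replacing the queries with the k digest tokens is what shrinks it to k × n and yields the linear growth claimed above.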