Long-context inference enhances the reasoning capability of Large Language Models (LLMs) but incurs significant computational overhead. Token-oriented methods such as pruning and skipping show great promise for reducing inference latency, yet they still suffer from insufficient structure-level optimization, outdated selection criteria, and interference from redundant tokens, resulting in suboptimal speed-accuracy trade-offs. To address these issues, we propose Self-Predictive Token Skipping (SPTS), a novel training-free framework for efficient long-context LLM inference. Specifically, motivated by probing the influence of a target layer before skipping it, we design two selective token skipping strategies for the typical Transformer sub-structures: Partial Attention Probing (PAP) for multi-head attention and Low-rank Transformation Probing (LTP) for the feed-forward network. The former selects informative tokens via a partial forward attention computation, while the latter constructs a low-rank proxy network to predict token transformations. In addition, a Multi-Stage Delayed Pruning (MSDP) strategy reallocates skipping budgets and progressively removes redundant tokens across layers. Extensive experiments demonstrate the effectiveness of our method, achieving up to 2.46$\times$ and 2.29$\times$ speedups for prefilling and end-to-end generation, respectively, while maintaining state-of-the-art accuracy. We will release the source code upon acceptance.
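The core intuition behind probe-then-skip can be illustrated with a minimal sketch of the PAP idea: run the attention computation with only a few heads, score each token by the attention mass it receives, and keep the top fraction for the full layer. This is an assumption-laden toy in NumPy, not the paper's implementation; the head subset, the scoring rule (summed attention mass), and the `keep_ratio` knob are all hypothetical choices for illustration.

```python
import numpy as np

def partial_attention_probe(Q, K, probe_heads, keep_ratio):
    """Hypothetical PAP-style probe: score tokens with a subset of heads.

    Q, K: (num_heads, seq_len, head_dim) query/key projections.
    probe_heads: indices of the few heads used for the cheap probe.
    keep_ratio: fraction of tokens kept for the full layer computation.
    """
    Qp, Kp = Q[probe_heads], K[probe_heads]            # probe with few heads only
    d = Qp.shape[-1]
    logits = Qp @ Kp.transpose(0, 2, 1) / np.sqrt(d)   # (h, L, L) scaled dot-product
    attn = np.exp(logits - logits.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                # softmax over keys
    # Assumed scoring rule: a token is "informative" if it receives
    # much attention mass, summed over probe heads and query positions.
    scores = attn.sum(axis=(0, 1))                     # (L,)
    k = max(1, int(keep_ratio * scores.shape[0]))
    return np.sort(np.argsort(scores)[-k:])            # kept indices, in order

rng = np.random.default_rng(0)
L, H, D = 16, 8, 4
Q = rng.standard_normal((H, L, D))
K = rng.standard_normal((H, L, D))
kept = partial_attention_probe(Q, K, probe_heads=[0, 1], keep_ratio=0.5)
print(len(kept))  # half of the 16 tokens survive the probe
```

The probe costs only `len(probe_heads)/H` of the full attention FLOPs for that layer, which is what makes selecting before skipping cheaper than computing the layer outright.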