Long-context inference enhances the reasoning capability of Large Language Models (LLMs) but incurs significant computational overhead. Token-oriented methods, such as pruning and skipping, have shown promise in reducing inference latency, yet they still suffer from inherently limited acceleration potential, outdated proxy signals, and redundancy interference, yielding suboptimal speed-accuracy trade-offs. To address these challenges, we propose SPTS (Self-Predictive Token Skipping), a training-free framework for efficient long-context LLM inference. Motivated by the idea of probing the influence of the layers targeted for skipping, we design two component-specific strategies for selective token skipping: Partial Attention Probing (PAP) for multi-head attention, which selects informative tokens via a partial forward attention computation, and Low-rank Transformation Probing (LTP) for the feed-forward network, which constructs a low-rank proxy network to predict token transformations. In addition, a Multi-Stage Delayed Pruning (MSDP) strategy reallocates the skipping budget and progressively prunes redundant tokens across layers. Extensive experiments demonstrate the effectiveness of our method, achieving up to 2.46$\times$ and 2.29$\times$ speedups for prefilling and end-to-end generation, respectively, while maintaining state-of-the-art model performance. The source code will be released publicly upon paper acceptance.
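To make the Partial Attention Probing idea concrete, the following is a minimal sketch of how token importance could be estimated from a partial forward attention computation: a small subset of heads serves as a cheap proxy for full attention, and tokens receiving the least attention mass become skip candidates. All names, shapes, and the head-subset heuristic here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def partial_attention_probe(q, k, keep_ratio=0.5, probe_heads=2):
    """Illustrative sketch (not the paper's implementation).

    q, k: query/key tensors of shape (heads, seq, dim) for one layer.
    Runs attention on only the first `probe_heads` heads as a cheap
    proxy, then keeps the tokens that receive the most attention mass.
    Returns sorted indices of tokens to keep.
    """
    qh = q[:probe_heads]                              # partial forward: subset of heads
    kh = k[:probe_heads]
    d = q.shape[-1]
    scores = qh @ kh.transpose(0, 2, 1) / np.sqrt(d)  # (probe_heads, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)      # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    # attention mass each key token receives, averaged over probe heads and queries
    importance = attn.mean(axis=(0, 1))               # (seq,)
    n_keep = max(1, int(keep_ratio * importance.size))
    return np.sort(np.argsort(importance)[-n_keep:])
```

In this toy form, the remaining tokens would be forwarded through the full attention block while the rest skip it; the real method additionally coordinates per-layer budgets (via MSDP) rather than using a fixed `keep_ratio`.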