Inference with large language models imposes a significant computational workload, often requiring the processing of billions of parameters. Although early-exit strategies have proven effective at reducing computational demands by halting inference early, they apply either only to the first token in the generation phase or at the prompt level in the prefill phase. Thus, the Key-Value (KV) cache for skipped layers remains a bottleneck for subsequent token generation, limiting the benefits of early exit. We introduce ADEPT (Adaptive Dynamic Early-exit Process for Transformers), a novel approach designed to overcome this issue and enable dynamic early exit in both the prefill and generation phases. The proposed adaptive token-level early-exit mechanism adjusts computation dynamically based on token complexity, optimizing efficiency without compromising performance. ADEPT further enhances the KV generation procedure by decoupling sequential dependencies in skipped layers, making token-level early exit more practical. Experimental results demonstrate that ADEPT improves efficiency by up to 25% in language generation tasks and achieves a 4x speed-up in downstream classification tasks, with up to a 45% improvement in performance.
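To make the two ideas in the abstract concrete, the following is a minimal sketch, not ADEPT's actual implementation: a confidence-based token-level early exit during decoding, plus back-filling of K/V cache entries for skipped layers so that later tokens can attend to a complete cache without re-running those layers. All names here (TinyDecoderLayer, exit_confidence, decode_step, the 0.9 threshold) are hypothetical illustrations, not taken from the paper.

```python
# Sketch of token-level early exit with KV back-filling for skipped layers.
# Hypothetical, simplified code; not the ADEPT implementation.
import torch
import torch.nn as nn


class TinyDecoderLayer(nn.Module):
    """Toy decoder layer exposing separate K/V projections for cache filling."""
    def __init__(self, d_model: int):
        super().__init__()
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, h: torch.Tensor):
        # Real attention over the cache is omitted; only shapes matter here.
        k, v = self.k_proj(h), self.v_proj(h)
        return h + self.ffn(h), (k, v)


def exit_confidence(h: torch.Tensor, lm_head: nn.Linear) -> float:
    """Proxy for token 'complexity': max softmax probability at this depth."""
    return torch.softmax(lm_head(h), dim=-1).max().item()


@torch.no_grad()
def decode_step(layers, lm_head, h, kv_cache, threshold=0.9):
    """Run one token through the stack, exiting early once confident.

    For layers skipped after the exit point, K/V entries are still produced
    from the exit-layer hidden state, so subsequent tokens see a full cache
    without re-running the skipped layers (the decoupling idea).
    """
    exit_layer = len(layers)
    for i, layer in enumerate(layers):
        h, (k, v) = layer(h)
        kv_cache[i].append((k, v))
        if exit_confidence(h, lm_head) >= threshold:
            exit_layer = i + 1
            break
    # Back-fill the cache for skipped layers from the exit-layer state.
    for j in range(exit_layer, len(layers)):
        kv_cache[j].append((layers[j].k_proj(h), layers[j].v_proj(h)))
    return lm_head(h).argmax(dim=-1), exit_layer


# Usage: a 4-layer toy stack, one decoding step on a random hidden state.
d, vocab, n_layers = 16, 32, 4
layers = nn.ModuleList(TinyDecoderLayer(d) for _ in range(n_layers))
lm_head = nn.Linear(d, vocab)
cache = [[] for _ in range(n_layers)]
token_id, used = decode_step(layers, lm_head, torch.randn(1, d), cache)
print("next token:", token_id.item(), "| layers computed:", used)
```

The design point the sketch captures is that the per-token exit decision is driven by a cheap confidence signal, while the KV cache for skipped layers is approximated from the exit-layer state rather than computed by running those layers, which is what removes the sequential dependency on deep layers during generation.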