Despite their strong performance on reasoning tasks, large reasoning models (LRMs) often suffer from overthinking: they produce unnecessarily long outputs and incur high end-to-end latency, a significant limitation to their real-world deployment. To mitigate overthinking, early-exit mechanisms have been proposed that terminate reasoning before the typical completion point, and they have been shown to shorten generation length effectively with minimal impact on accuracy. However, these mechanisms rely on probing, which introduces detection overhead that limits end-to-end latency gains and compromises generalizability across diverse problems. Inspired by the use of hidden states in speculative decoding, we propose SpecExit, a novel framework that predicts both future tokens and an early-exit signal directly from a lightweight draft model, without probing overhead. Compared to a speculative decoding baseline, SpecExit reduces average generation length by 66\% and achieves a 2.5x speedup in end-to-end latency without compromising accuracy. By leveraging the signals inherent in hidden states to produce effective early-exit decisions, our method suggests broader uses of hidden states for efficient reasoning. Our code is available at https://github.com/Tencent/AngelSlim.