Parallel decoding methods such as Jacobi decoding show promise for more efficient LLM inference: they break the sequential nature of LLM decoding and transform it into parallelizable computation. In practice, however, Jacobi decoding achieves little speedup over traditional autoregressive (AR) decoding, primarily because it seldom predicts more than one token accurately in a single fixed-point iteration step. To address this, we develop a new approach aimed at fast convergence from any state on a Jacobi trajectory to the fixed point. This is accomplished by refining the target LLM to consistently predict the fixed point given any state as input. Extensive experiments demonstrate the effectiveness of our method, showing 2.4$\times$ to 3.4$\times$ improvements in generation speed while preserving generation quality on both domain-specific and open-domain benchmarks.
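The fixed-point view of Jacobi decoding can be illustrated with a minimal toy sketch. Here `next_token` is a hypothetical deterministic stand-in for an LLM's greedy prediction (not a real model); the point is that each Jacobi iteration re-predicts every position in parallel from the current guess, and the iteration's fixed point coincides with the AR output:

```python
def next_token(prefix):
    # Hypothetical stand-in for an LLM's greedy argmax: the next token
    # is a simple deterministic function of the last token in the prefix.
    return (prefix[-1] * 3 + 1) % 17

def ar_decode(prompt, n):
    """Standard autoregressive decoding: one sequential model call per token."""
    seq = list(prompt)
    for _ in range(n):
        seq.append(next_token(seq))
    return seq[len(prompt):], n  # (generated tokens, sequential steps)

def jacobi_decode(prompt, n, init=0):
    """Jacobi decoding: initialize a guess for all n tokens, then refine
    every position in parallel until reaching a fixed point."""
    guess = [init] * n
    steps = 0
    while True:
        steps += 1
        # One Jacobi iteration: each position is re-predicted from the
        # current guess of its prefix (all positions at once on a GPU).
        new = [next_token(list(prompt) + guess[:i]) for i in range(n)]
        if new == guess:          # fixed point reached
            return guess, steps
        guess = new

prompt = [1]
ar_tokens, ar_steps = ar_decode(prompt, 8)
jac_tokens, jac_steps = jacobi_decode(prompt, 8)
assert jac_tokens == ar_tokens  # the fixed point matches the AR output exactly
```

In this toy, each iteration fixes only about one additional token, so vanilla Jacobi decoding needs roughly as many sequential steps as AR decoding, which is exactly the inefficiency the abstract describes and that consistency training on Jacobi trajectories is meant to remove.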