Discrete diffusion models have recently become competitive with autoregressive models for language modeling, even outperforming them on reasoning tasks requiring planning and global coherence, but they require more computation at inference time. We trace this trade-off to a key mechanism: diffusion models are trained to jointly predict a distribution over all unknown tokens, including those that will not actually be decoded in the current step. Ablating this joint prediction yields faster inference but degrades performance, revealing that accurate prediction at the decoded position relies on joint reasoning about the distribution of undecoded tokens. We interpret these as latent tokens and introduce a method for modulating their number, demonstrating empirically that this enables a smooth trade-off between inference speed and sample quality. Furthermore, we demonstrate that latent tokens can be introduced into autoregressive models through an auxiliary multi-token prediction objective, yielding substantial improvements on the same reasoning tasks where they have traditionally struggled. Our results suggest that latent tokens, while arising naturally in diffusion, represent a general mechanism for improving performance on tasks requiring global coherence or lookahead.
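To make the mechanism concrete, the following is a minimal, purely illustrative Python sketch of the decoding loop described above: at each step the denoiser is queried for distributions over a set of masked positions, but tokens are committed at only `k` of them; the remaining `num_latent` positions are the latent tokens, predicted but never sampled. The `toy_denoiser` here is a random stand-in for a trained model, and the position-selection rule is an assumption for illustration, not the exact procedure used in the experiments.

```python
import random

VOCAB = ["a", "b", "c", "d"]
MASK = "<mask>"

def toy_denoiser(seq, positions):
    """Stand-in for a trained masked-diffusion denoiser (hypothetical).
    Returns a vocabulary distribution for each requested masked position;
    a real model would condition on the whole visible sequence and reason
    jointly across all requested positions."""
    return {i: {v: random.random() for v in VOCAB} for i in positions}

def decode_step(seq, k, num_latent):
    """One reverse step: predict k + num_latent masked positions, but commit
    tokens only at the k most confident ones. The other num_latent positions
    act as latent tokens: their predictions are computed, never sampled."""
    masked = [i for i, t in enumerate(seq) if t == MASK]
    # Illustrative choice: query the first k + num_latent masked positions.
    asked = masked[: k + num_latent]
    dists = toy_denoiser(seq, asked)
    by_conf = sorted(asked, key=lambda i: -max(dists[i].values()))
    for i in by_conf[:k]:
        seq[i] = max(dists[i], key=dists[i].get)  # greedy commit
    return seq

seq = [MASK] * 8
while MASK in seq:
    seq = decode_step(seq, k=2, num_latent=4)
print(" ".join(seq))
```

Setting `num_latent=0` corresponds to the ablation of joint prediction (cheaper per step), while larger values trade extra computation for predictions informed by more undecoded positions.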