Diffusion language models generate text through iterative refinement, a process that is often computationally inefficient because many tokens reach stability long before the final denoising step. We introduce a training-free, token-level early stopping approach that identifies convergence independently at each position. Our method leverages lightweight signals derived from the model's predictions and local context to dynamically determine when individual tokens can be finalized. This yields adaptive per-token freezing without task-specific fine-tuning, substantially reducing the total number of diffusion steps required. Across diverse benchmarks spanning mathematical reasoning, general question answering, and scientific understanding, our approach achieves state-of-the-art efficiency gains while preserving generation quality.
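To make the mechanism concrete, the sketch below illustrates one plausible instantiation of per-token freezing inside a denoising loop. The abstract does not specify the actual convergence signals, so this is a minimal sketch under assumptions: `logits_fn` is a hypothetical stand-in for one reverse-diffusion model call, and `patience` (argmax unchanged for several consecutive steps) and `conf_threshold` (predicted probability above a cutoff) are illustrative placeholder criteria, not the paper's method.

```python
import numpy as np

def denoise_with_token_freezing(logits_fn, tokens, num_steps=64,
                                patience=3, conf_threshold=0.9):
    """Illustrative per-token early stopping for a diffusion LM denoising loop.

    logits_fn(tokens, step) -> (seq_len, vocab) logits is a hypothetical
    stand-in for one model call; `patience` and `conf_threshold` are assumed
    convergence signals, not the signals used in the paper.
    """
    seq_len = tokens.shape[0]
    frozen = np.zeros(seq_len, dtype=bool)     # positions finalized so far
    stable = np.zeros(seq_len, dtype=int)      # consecutive stable predictions
    prev_pred = np.full(seq_len, -1, dtype=np.int64)

    for step in range(num_steps):
        if frozen.all():                       # every position converged: stop early
            break
        logits = logits_fn(tokens, step)
        # Numerically stable softmax over the vocabulary.
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        pred = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)

        active = ~frozen
        # Track how long each active token's argmax has stayed unchanged.
        stable[active] = np.where(pred[active] == prev_pred[active],
                                  stable[active] + 1, 0)
        prev_pred[active] = pred[active]

        # Update only active positions (frozen tokens keep their final value),
        # then freeze any token that is both stable and confident.
        tokens[active] = pred[active]
        frozen |= active & (stable >= patience) & (conf >= conf_threshold)
    return tokens
```

Note that this sketch only shows the bookkeeping: the model is still invoked on the full sequence each step, so in a real system the compute saving would come from caching or skipping recomputation at frozen positions and from terminating the loop once every position is frozen.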