Diffusion language models (DLMs) have emerged as a promising alternative to the long-dominant autoregressive (AR) paradigm, offering a parallelizable decoding process that could yield greater efficiency. Yet in practice, current open-source DLMs often underperform their AR counterparts in speed, limiting their real-world utility. This work presents a systematic study of DLM efficiency, identifying key issues in prior evaluation methods. Through empirical benchmarking and a roofline-based theoretical analysis, we demonstrate that AR models generally achieve higher throughput, while DLMs consistently lag behind. We also investigate acceleration strategies, finding that techniques such as dual caching and parallel decoding mainly offer gains at small batch sizes, with their benefits diminishing as batch size grows. Our findings underscore the need for rigorous evaluation methods and improved acceleration strategies to advance research on DLMs.