Diffusion language models are a promising alternative to autoregressive models due to their potential for faster generation. Among discrete diffusion approaches, masked diffusion currently dominates, largely driven by strong perplexity on language modeling benchmarks. In this work, we present the first scaling-law study of uniform-state and interpolating discrete diffusion methods. We also show that masked diffusion models can be made approximately 12% more FLOPs-efficient when trained with a simple cross-entropy objective. We find that perplexity is informative within a diffusion family but can be misleading across families: models with worse likelihood scaling may still be preferable because their sampling is faster and more practical, as reflected by the speed-quality Pareto frontier. These results challenge the views that masked diffusion is categorically the future of diffusion language modeling and that perplexity alone suffices for cross-algorithm comparison. Scaling all methods to 1.7B parameters, we show that uniform-state diffusion remains competitive on likelihood-based benchmarks and outperforms autoregressive and masked diffusion models on GSM8K, despite worse validation perplexity. We provide code, model checkpoints, and video tutorials on the project page: http://s-sahoo.github.io/scaling-dllms
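To make the "simple cross-entropy objective" concrete, here is a minimal sketch of what such a masked diffusion training step might look like. This is an illustration under assumptions, not the paper's exact recipe: the function name, the uniform per-sequence noise level, the `eps` clipping, and the omitted ELBO reweighting (e.g., 1/t) are all hypothetical; `model` is assumed to map token ids of shape (batch, length) to logits of shape (batch, length, vocab).

```python
import torch
import torch.nn.functional as F

def masked_diffusion_ce_loss(model, x, mask_id, eps=1e-3):
    """One training step of a masked diffusion LM with a plain
    cross-entropy loss over masked positions (hypothetical sketch)."""
    b, l = x.shape
    # Sample a masking rate t ~ U(eps, 1) for each sequence.
    t = eps + (1.0 - eps) * torch.rand(b, device=x.device)
    # Corrupt: replace each token with [MASK] independently w.p. t.
    is_masked = torch.rand(b, l, device=x.device) < t[:, None]
    x_t = torch.where(is_masked, torch.full_like(x, mask_id), x)
    # Predict the clean tokens from the corrupted sequence.
    logits = model(x_t)
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), x.reshape(-1), reduction="none"
    ).view(b, l)
    # Plain CE averages over masked positions; an ELBO-based objective
    # would additionally reweight each sequence's loss (e.g., by 1/t) --
    # that reweighting is what this simplified objective drops.
    mask = is_masked.float()
    return (ce * mask).sum() / mask.sum().clamp(min=1.0)
```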