Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterizations, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.78~(CIFAR-10) and 3.42 (ImageNet 64$\times$64) bits per dimension, comparable to or better than autoregressive models of similar sizes.
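The "weighted integral of cross-entropy losses" claim can be sketched as follows; the notation here is assumed for illustration and not taken from the abstract: $\alpha_t \in [0,1]$ is a masking schedule giving the probability that a token remains unmasked at time $t$, $m$ is the mask token, $z_t$ is the partially masked sequence, and $p_\theta$ is the model's per-token prediction.

```latex
% Sketch (assumed notation): the continuous-time variational objective
% as a schedule-weighted integral of cross-entropy terms, where the
% cross-entropy is accumulated only over currently masked positions.
-\mathcal{L}
  \;=\;
  \int_0^1
    \frac{\alpha_t'}{1-\alpha_t}\,
    \mathbb{E}_{q(z_t \mid x)}
    \Bigg[
      \sum_{n \,:\, z_t^n = m}
        \log p_\theta\!\left(x^n \mid z_t\right)
    \Bigg]
  \,\mathrm{d}t
```

Under these assumptions, each time slice contributes an ordinary cross-entropy over the masked positions, scaled by a schedule-dependent weight $\alpha_t'/(1-\alpha_t)$; exact sign and weighting conventions depend on how the schedule is parameterized.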