We present self-speculative masked diffusions, a new class of masked diffusion generative models for discrete data that require significantly fewer function evaluations to generate samples. Standard masked diffusion models predict factorized logits over the currently masked positions. A number of masked positions are then sampled; however, because of the factorization approximation, sampling too many positions in a single step degrades sample quality. As a result, many simulation steps, and therefore many neural network function evaluations, are required to generate high-quality data. We reduce this computational burden by generating non-factorized predictions over masked positions. This is achieved by changing the final transformer attention mask from non-causal to causal, enabling draft token generation and parallel validation via a novel, model-integrated speculative sampling mechanism, which yields a non-factorized predictive distribution over masked positions in a single forward pass. We apply our method to GPT2-scale text modelling and protein sequence generation, finding that it achieves a ~2x reduction in the required number of network forward passes relative to standard masked diffusion models.
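To make the draft-and-validate step concrete, here is a minimal sketch of the standard speculative sampling acceptance rule that parallel validation builds on. This is a generic illustration, not the paper's model-integrated mechanism: `p` and `q` stand for (assumed) target and draft distributions at each drafted position, and all names are hypothetical.

```python
import numpy as np

def speculative_accept(p, q, draft_tokens, rng):
    """Validate a block of draft tokens in parallel.

    p: (T, V) target-model probabilities at each draft position (assumption:
       obtained in one forward pass over the whole draft block)
    q: (T, V) draft-model probabilities used to propose draft_tokens
    Returns the accepted prefix; on first rejection, one corrected token is
    sampled from the residual distribution and validation stops.
    """
    accepted = []
    for t, x in enumerate(draft_tokens):
        # Accept draft token x with probability min(1, p[t, x] / q[t, x]).
        if rng.random() < min(1.0, p[t, x] / q[t, x]):
            accepted.append(int(x))
        else:
            # On rejection, resample from the renormalized residual
            # max(p - q, 0), which keeps the overall samples distributed as p.
            residual = np.maximum(p[t] - q[t], 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            break
    return accepted
```

Because every accepted token is exactly distributed according to `p`, a single validation pass can commit several drafted positions at once, which is the source of the reduction in forward passes.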