Masked generative models (MGMs) can generate tokens in parallel and in any order, unlike autoregressive models (ARMs), which decode one token at a time, left-to-right. However, MGMs process the full-length sequence at every sampling step, including mask tokens that carry no information. In contrast, ARMs process only the previously generated tokens. We introduce ``Partition Generative Models'' (PGMs), which replace masking with partitioning. Tokens are split into two groups that cannot attend to each other, and the model learns to predict each group conditioned on the other, eliminating mask tokens entirely. Because the groups do not interact, PGMs can process only the clean tokens during sampling, like ARMs, while retaining parallel, any-order generation, like MGMs. On OpenWebText, PGMs achieve $5-5.5\times$ higher throughput than MDLM while producing samples with lower Generative Perplexity. On ImageNet, PGMs reach comparable FID to MaskGIT with a $7.5\times$ throughput improvement. With twice as many steps, the FID improves to 4.56 while remaining $3.9\times$ faster than MGMs. Finally, PGMs remain compatible with existing MGM samplers and distillation methods.
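The core partitioning idea — two token groups that cannot attend to each other, each predicted from the other — can be illustrated with a block attention mask. This is a minimal hypothetical sketch of that constraint, not the paper's actual architecture; the function name and the use of a boolean mask are assumptions for illustration only.

```python
import numpy as np

def partition_attention_mask(group_ids):
    """Build a boolean attention mask from a two-group partition.

    group_ids: 0/1 label per token position. mask[i, j] is True only
    when tokens i and j lie in *different* groups, so each group's
    predictions condition solely on the other group's (clean) tokens,
    and no mask tokens are ever needed.  (Hypothetical illustration.)
    """
    g = np.asarray(group_ids)
    return g[:, None] != g[None, :]

# Example: positions 0 and 3 in group 0, positions 1 and 2 in group 1.
mask = partition_attention_mask([0, 1, 1, 0])
# mask[0, 1] is True (different groups); mask[0, 3] is False (same group).
```

Because attention within a group is fully blocked, a sampler can drop the not-yet-generated group from the input entirely and process only the clean tokens, which is the source of the throughput gains claimed above.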