In generative models, two paradigms have gained traction across applications: next-set prediction-based Masked Generative Models and next-noise prediction-based Non-Autoregressive Models, e.g., Diffusion Models. In this work, we propose using discrete-state models to connect them and explore their scalability in the vision domain. First, we conduct a step-by-step analysis of both model types in a unified design space, covering timestep independence, the noise schedule, temperature, guidance strength, and related factors, in a scalable manner. Second, we recast typical discriminative tasks, e.g., image segmentation, as an unmasking process from [MASK] tokens in a discrete-state model. This enables various sampling processes, including flexible conditional sampling, by training only once to model the joint distribution. These explorations lead to our framework, Discrete Interpolants, which achieves state-of-the-art or competitive performance compared to previous discrete-state methods on various benchmarks, such as ImageNet256, MS COCO, and the video dataset FaceForensics. In summary, by leveraging [MASK] in discrete-state models, we can bridge Masked Generative Models and Non-Autoregressive Diffusion Models, as well as generative and discriminative tasks.
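To make the masking view concrete, the sketch below illustrates one plausible reading of the interpolant on a token-based image representation (e.g., VQ codes): a forward process that independently replaces tokens with [MASK] according to a noise schedule, and a reverse process that starts from an all-[MASK] sequence and iteratively unmasks. The token representation, MASK_ID, the linear schedule, the model signature, and the confidence-based unmasking rule are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a discrete masking interpolant (assumptions labeled below).
import torch

MASK_ID = 1024  # hypothetical [MASK] id appended to a 1024-entry codebook

def corrupt(x0: torch.Tensor, t: float) -> torch.Tensor:
    """Forward process: each token of x0 is independently replaced by [MASK]
    with probability 1 - alpha(t); here alpha(t) = 1 - t (assumed linear
    schedule, t a scalar in [0, 1])."""
    keep = torch.rand(x0.shape) < (1.0 - t)
    return torch.where(keep, x0, torch.full_like(x0, MASK_ID))

@torch.no_grad()
def sample(model, seq_len: int, steps: int = 16, temperature: float = 1.0):
    """Reverse process: start from all-[MASK], predict tokens everywhere,
    and commit the most confident predictions at masked positions each step."""
    x = torch.full((1, seq_len), MASK_ID, dtype=torch.long)
    for i in range(steps):
        logits = model(x)                                 # (1, seq_len, vocab)
        probs = torch.softmax(logits / temperature, dim=-1)
        pred = torch.distributions.Categorical(probs).sample()
        conf = probs.gather(-1, pred.unsqueeze(-1)).squeeze(-1)
        conf = conf.masked_fill(x != MASK_ID, -1.0)       # rank only masked slots
        # unmask a linearly growing fraction of positions per step
        k = max(1, int(seq_len * (i + 1) / steps) - int(seq_len * i / steps))
        idx = conf.topk(k, dim=-1).indices
        x.scatter_(1, idx, pred.gather(1, idx))
    return x
```

Under these assumptions, a small step count with confidence-ranked commits recovers masked-generative-style decoding, while shrinking the per-step unmasked fraction moves the sampler toward discrete-diffusion-style behavior, which is the bridge the abstract describes.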