Multimodal generative models that both understand and generate across modalities are dominated by autoregressive (AR) approaches, which process tokens sequentially, left to right or top to bottom. These models jointly handle images, text, video, and audio for various tasks such as image captioning, question answering, and image generation. In this work, we explore discrete diffusion models as a unified generative formulation in the joint text and image domain, building upon their recent success in text generation. Discrete diffusion models offer several advantages over AR models, including improved control over the quality-versus-diversity trade-off of generated samples, the ability to perform joint multimodal inpainting (across both text and image domains), and greater controllability in generation through guidance. Leveraging these benefits, we present the first Unified Multimodal Discrete Diffusion (UniDisc) model, capable of jointly understanding and generating text and images for a variety of downstream tasks. We compare UniDisc to multimodal AR models, performing a scaling analysis and demonstrating that UniDisc outperforms them in performance and inference-time compute, and offers enhanced controllability, editability, inpainting, and a flexible trade-off between inference time and generation quality. Code and additional visualizations are available at https://unidisc.github.io.
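To make the masked discrete diffusion formulation concrete, the sketch below shows one plausible sampling loop over a joint [text | image] token sequence: start from a fully masked sequence and iteratively unmask the most confident positions (a MaskGIT-style confidence schedule). This is a minimal illustration under assumed names and shapes, not the authors' implementation; `denoiser`, `MASK_ID`, the sequence layout, and the unmasking schedule are all assumptions for exposition.

```python
# Minimal sketch (NOT the authors' implementation) of masked discrete diffusion
# sampling over a joint [text | image] token sequence. All names here
# (denoiser, MASK_ID, sequence layout, schedule) are illustrative assumptions.
import torch

MASK_ID = 0          # hypothetical id of the [MASK] token shared by both modalities
SEQ_LEN = 64 + 256   # e.g. 64 text tokens followed by 256 image tokens (assumed layout)


@torch.no_grad()
def sample(denoiser, steps: int = 32, device: str = "cpu") -> torch.Tensor:
    """Iteratively unmask a fully masked joint sequence.

    `denoiser(x)` is assumed to return per-position logits over the joint
    vocabulary, with shape (1, SEQ_LEN, vocab_size).
    """
    x = torch.full((1, SEQ_LEN), MASK_ID, dtype=torch.long, device=device)
    for step in range(steps):
        still_masked = x == MASK_ID
        if still_masked.sum() == 0:
            break
        logits = denoiser(x)                          # (1, SEQ_LEN, V)
        probs = logits.softmax(dim=-1)
        conf, pred = probs.max(dim=-1)                # per-position confidence and argmax
        # Unmask a fraction of the remaining masked positions per step,
        # taking the most confident predictions first.
        n_unmask = max(1, int(still_masked.sum().item() / (steps - step)))
        conf = conf.masked_fill(~still_masked, -1.0)  # never re-select decided positions
        idx = conf.topk(n_unmask, dim=-1).indices
        x[0, idx[0]] = pred[0, idx[0]]
    return x  # joint sequence: detokenize the text and image parts separately
```

Under this view, joint multimodal inpainting falls out naturally: initialize `x` with the observed text and/or image tokens at known positions (instead of `MASK_ID`) and keep them fixed, so the model fills in only the missing spans of either modality.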