Diffusion models, with their powerful expressivity and high sample quality, have achieved state-of-the-art (SOTA) performance in the generative domain. The pioneering Vision Transformer (ViT) has also demonstrated strong modeling capability and scalability, especially for recognition tasks. In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model, denoted Diffusion Vision Transformers (DiffiT). Specifically, we propose a methodology for fine-grained control of the denoising process and introduce the Time-dependent Multihead Self-Attention (TMSA) mechanism. DiffiT is surprisingly effective at generating high-fidelity images with significantly better parameter efficiency. We also propose latent- and image-space DiffiT models and show SOTA performance on a variety of class-conditional and unconditional synthesis tasks at different resolutions. The latent DiffiT model achieves a new SOTA FID score of 1.73 on the ImageNet-256 dataset while having 19.85% and 16.88% fewer parameters than other Transformer-based diffusion models such as MDT and DiT, respectively. Code: https://github.com/NVlabs/DiffiT
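As a concrete illustration of the time-dependent self-attention idea named above, the sketch below shows one plausible PyTorch implementation in which a shared time-step embedding is linearly fused into the query, key, and value projections, so the attention pattern can adapt across denoising steps. The module name, layer layout, and exact fusion scheme are assumptions for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class TimeDependentMSA(nn.Module):
    """Hypothetical sketch of time-dependent multi-head self-attention:
    a time embedding is linearly fused into the Q/K/V projections so that
    attention weights vary with the denoising step (assumed design)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Separate projections for spatial tokens and the time embedding.
        self.qkv_spatial = nn.Linear(dim, 3 * dim, bias=False)
        self.qkv_time = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) spatial tokens; t_emb: (B, C) time-step embedding.
        B, N, C = x.shape
        # Fuse the time contribution into every token's Q/K/V projection.
        qkv = self.qkv_spatial(x) + self.qkv_time(t_emb).unsqueeze(1)
        qkv = qkv.reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Minimal usage check with made-up shapes.
if __name__ == "__main__":
    msa = TimeDependentMSA(dim=256, num_heads=8)
    tokens = torch.randn(2, 64, 256)   # batch of 2, 64 tokens, width 256
    t_emb = torch.randn(2, 256)        # one time embedding per sample
    print(msa(tokens, t_emb).shape)    # torch.Size([2, 64, 256])
```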