Diffusion Transformers (DiTs) introduce the transformer architecture to diffusion tasks for latent-space image generation. With an isotropic architecture that chains a series of transformer blocks, DiTs demonstrate competitive performance and good scalability; meanwhile, the abandonment of U-Net by DiTs and their subsequent improvements is worth rethinking. To this end, we conduct a simple toy experiment comparing a U-Net-architectured DiT with an isotropic one. It turns out that the U-Net architecture gains only a slight advantage despite its inductive bias, indicating potential redundancies within the U-Net-style DiT. Inspired by the observation that U-Net backbone features are low-frequency-dominated, we perform token downsampling on the query-key-value tuple for self-attention, which brings further improvements despite a considerable reduction in computation. Based on self-attention with downsampled tokens, we propose a series of U-shaped DiTs (U-DiTs) in this paper and conduct extensive experiments to demonstrate the extraordinary performance of U-DiT models. The proposed U-DiT could outperform DiT-XL/2 with only 1/6 of its computation cost. Code is available at https://github.com/YuchuanTian/U-DiT.
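To make the core idea concrete, here is a minimal, dependency-free sketch of self-attention over downsampled tokens. It is illustrative only: it assumes a plain 2x2 average pooling of the token grid as the downsampler and single-head attention, which simplifies the actual U-DiT operator; the function names (`downsample_2x2`, `attention`) are hypothetical helpers, not the paper's API. The point it demonstrates is the cost structure: pooling an h x w token grid by 2x2 shrinks the token count by 4x, so the pairwise score computation in attention shrinks by roughly 16x.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def downsample_2x2(tokens, h, w):
    """Average-pool tokens laid out on an h x w grid by 2x2.

    tokens: list of h*w vectors (each a list of floats), row-major order.
    Returns (h//2)*(w//2) pooled token vectors.
    """
    d = len(tokens[0])
    pooled = []
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            group = [tokens[(i + di) * w + (j + dj)]
                     for di in range(2) for dj in range(2)]
            pooled.append([sum(g[t] for g in group) / 4.0 for t in range(d)])
    return pooled

def attention(q, k, v):
    """Single-head scaled dot-product attention over lists of token vectors."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        weights = softmax(scores)
        out.append([sum(wj * vj[t] for wj, vj in zip(weights, v))
                    for t in range(d)])
    return out

# Toy example: a 4x4 grid of 2-dim tokens (16 tokens total).
h, w = 4, 4
tokens = [[float(i), float(i) / 2.0] for i in range(h * w)]

# Full attention would compute 16*16 = 256 score pairs.
# Downsampling q, k, v first reduces this to 4*4 = 16 pairs.
q = downsample_2x2(tokens, h, w)
k = downsample_2x2(tokens, h, w)
v = downsample_2x2(tokens, h, w)
out = attention(q, k, v)  # 4 output tokens, one per pooled position
```

Since the backbone features are low-frequency-dominated, the intuition is that little information is lost by attending over the pooled (low-pass-filtered) tokens; in U-DiT the downsampled outputs are then mapped back to the full resolution, a step omitted here for brevity.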