Diffusion Transformer (DiT), an emerging diffusion model for image generation, has demonstrated superior performance but suffers from substantial computational costs. Our investigations reveal that these costs stem from the static inference paradigm, which inevitably introduces redundant computation at certain diffusion timesteps and spatial regions. To address this inefficiency, we propose the Dynamic Diffusion Transformer (DyDiT), an architecture that dynamically adjusts its computation along both the timestep and spatial dimensions during generation. Specifically, we introduce a Timestep-wise Dynamic Width (TDW) approach that adapts model width conditioned on the generation timestep. In addition, we design a Spatial-wise Dynamic Token (SDT) strategy to avoid redundant computation at unnecessary spatial locations. Extensive experiments on various datasets and different-sized models verify the superiority of DyDiT. Notably, with <3% additional fine-tuning iterations, our method reduces the FLOPs of DiT-XL by 51%, accelerates generation by 1.73×, and achieves a competitive FID score of 2.07 on ImageNet. The code is publicly available at https://github.com/NUS-HPC-AI-Lab/Dynamic-Diffusion-Transformer.
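The two mechanisms above can be illustrated with a toy sketch. This is not the paper's implementation; the function names, the linear width schedule, and the score-based token selection are assumptions chosen only to convey the idea of timestep-conditioned width and spatial token skipping.

```python
# Illustrative sketch of DyDiT's two ideas (hypothetical heuristics,
# not the actual TDW/SDT routing from the paper).

def timestep_wise_width(t, num_timesteps, full_width, min_ratio=0.25):
    """TDW idea: pick an active channel width conditioned on timestep t.
    Here we shrink the width linearly toward min_ratio at timestep 0."""
    ratio = min_ratio + (1.0 - min_ratio) * (t / (num_timesteps - 1))
    return max(1, int(full_width * ratio))

def spatial_wise_tokens(token_scores, keep_ratio):
    """SDT idea: process only the highest-scoring spatial tokens; the
    remaining tokens bypass the block, skipping their computation."""
    k = max(1, int(len(token_scores) * keep_ratio))
    order = sorted(range(len(token_scores)), key=lambda i: -token_scores[i])
    return sorted(order[:k])  # indices of tokens kept for computation

# Example: a 1152-wide model uses its full width at the noisiest timestep
# but only a quarter of it at the final one, and half of the tokens are
# routed through the block based on their (here, made-up) scores.
print(timestep_wise_width(999, 1000, 1152))          # full width at t=999
print(timestep_wise_width(0, 1000, 1152))            # reduced width at t=0
print(spatial_wise_tokens([0.1, 0.9, 0.5, 0.3], 0.5))
```

In a real model, the kept-token indices would gather a subset of the token sequence before the attention/MLP block and scatter the outputs back, so FLOPs scale with both the active width and the number of processed tokens.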