Diffusion transformers (DiTs) achieve remarkable performance in visual generation, but their iterative denoising process, combined with large model capacity, leads to high inference cost. Recent works have demonstrated that the iterative denoising process of DiT models involves substantial redundant computation across steps. To effectively reduce this redundancy, we propose CorGi (Contribution-Guided Block-Wise Interval Caching), a training-free DiT inference acceleration framework that selectively reuses the outputs of transformer blocks across denoising steps. CorGi caches the outputs of low-contribution blocks and reuses them in later steps within each interval, reducing redundant computation while preserving generation quality. For text-to-image tasks, we further propose CorGi+, which leverages per-block cross-attention maps to identify salient tokens and applies partial attention updates to protect important object details. Evaluations on state-of-the-art DiT models demonstrate that CorGi and CorGi+ achieve an average speedup of up to 2.0x while preserving high generation quality.
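To make the caching mechanism concrete, the following is a minimal sketch of contribution-guided block-wise interval caching, not the paper's exact implementation. It assumes a hypothetical contribution score (the relative norm of the residual a block adds to its input) and residual reuse: at the first step of each interval all blocks run and are scored, and blocks scoring below a threshold have their residuals cached and replayed for the remaining steps of the interval. The names `IntervalBlockCache`, `contribution_score`, `interval`, and `threshold` are illustrative assumptions.

```python
import torch

def contribution_score(x_in: torch.Tensor, x_out: torch.Tensor) -> float:
    # Hypothetical proxy for a block's contribution at this step:
    # how much the block changes its input, relative to the input's norm.
    return ((x_out - x_in).norm() / (x_in.norm() + 1e-8)).item()

class IntervalBlockCache:
    """Sketch of block-wise interval caching for a DiT (assumed structure).

    At the first denoising step of every interval, all transformer blocks
    run and are scored; blocks whose contribution falls below `threshold`
    have their residuals cached and reused for the rest of the interval.
    """

    def __init__(self, blocks, interval: int = 4, threshold: float = 0.05):
        self.blocks = blocks          # list of transformer block modules
        self.interval = interval      # denoising steps per caching interval
        self.threshold = threshold    # contribution cutoff for caching
        self.cache = {}               # block index -> cached residual

    def forward(self, x: torch.Tensor, step: int, **block_kwargs) -> torch.Tensor:
        refresh = (step % self.interval == 0)  # interval boundary: recompute all
        if refresh:
            self.cache.clear()
        for i, block in enumerate(self.blocks):
            if not refresh and i in self.cache:
                x = x + self.cache[i]  # reuse cached residual; skip this block
                continue
            x_out = block(x, **block_kwargs)
            if refresh and contribution_score(x, x_out) < self.threshold:
                self.cache[i] = x_out - x  # low contribution: cache residual
            x = x_out
        return x
```

Caching the residual rather than the raw block output is one plausible design choice here: at later steps the block's input differs, so replaying an additive update degrades the hidden state less than overwriting it with a stale output.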