Diffusion Transformers (DiTs) have achieved state-of-the-art (SOTA) image generation quality but suffer from high latency and memory inefficiency, making them difficult to deploy on resource-constrained devices. One major efficiency bottleneck is that existing DiTs apply equal computation across all regions of an image. However, not all image tokens are equally important: certain localized regions, such as objects, require more computation than others. To address this, we propose DiffCR, a dynamic DiT inference framework with differentiable compression ratios that automatically learns to route computation across layers and timesteps for each image token, yielding efficient DiTs. Specifically, DiffCR integrates three features: (1) a token-level routing scheme, in which each DiT layer includes a router, fine-tuned jointly with the model weights, that predicts token importance scores so that unimportant tokens bypass the layer's computation entirely; (2) a layer-wise differentiable ratio mechanism, in which different DiT layers automatically learn distinct compression ratios from a zero initialization, yielding large compression ratios in redundant layers while other layers remain less compressed or even uncompressed; (3) a timestep-wise differentiable ratio mechanism, in which each denoising timestep learns its own compression ratio, producing higher ratios at noisier timesteps and lower ratios as the image becomes clearer. Extensive experiments on text-to-image and inpainting tasks show that DiffCR effectively captures dynamism across the token, layer, and timestep axes, achieving superior trade-offs between generation quality and efficiency compared to prior works. The project website is available at https://www.haoranyou.com/diffcr.
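The routing and ratio mechanisms described above can be pictured with a short, hypothetical PyTorch sketch. This is not the authors' implementation: all names (`TokenRouter`, `TimestepRatio`, `RoutedDiTBlock`), the sigmoid gating, and the temperature are assumptions made only for illustration. The sketch shows a per-layer router that scores tokens, zero-initialized layer-wise and timestep-wise ratio parameters that set how many tokens each layer keeps, a soft differentiable gate used during training, and a hard top-k path at inference where low-scoring tokens bypass the block.

```python
import torch
import torch.nn as nn


class TokenRouter(nn.Module):
    """Predicts an importance score in (0, 1) for each image token."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -> scores: (batch, tokens)
        return torch.sigmoid(self.proj(x)).squeeze(-1)


class TimestepRatio(nn.Module):
    """Timestep-wise compression ratio: one zero-initialized learnable scalar per denoising step."""

    def __init__(self, num_timesteps: int):
        super().__init__()
        self.ratios = nn.Parameter(torch.zeros(num_timesteps))

    def forward(self, t: int) -> torch.Tensor:
        return self.ratios[t].clamp(0.0, 1.0)


class RoutedDiTBlock(nn.Module):
    """A transformer block whose computation is bypassed for low-importance tokens.

    The layer-wise compression ratio is a zero-initialized learnable scalar, so
    training starts from the uncompressed model and each layer learns how many
    tokens it can afford to drop.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.router = TokenRouter(dim)
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.layer_ratio = nn.Parameter(torch.zeros(1))  # layer-wise ratio, zero init

    def forward(self, x: torch.Tensor, timestep_ratio: torch.Tensor) -> torch.Tensor:
        scores = self.router(x)                                        # (B, N) in (0, 1)
        compress = self.layer_ratio.clamp(0.0, 1.0) * timestep_ratio   # scalar in [0, 1]

        if self.training:
            # Soft, differentiable routing: every token is processed, but its update is
            # scaled by a gate that shrinks as the compression ratio grows, so gradients
            # reach both the router and the ratio parameters (a simple surrogate; the
            # paper's exact training objective may differ).
            gate = torch.sigmoid((scores - compress) / 0.1).unsqueeze(-1)  # (B, N, 1)
            h = x + gate * self.attn(self.norm1(x), self.norm1(x), self.norm1(x))[0]
            return h + gate * self.mlp(self.norm2(h))

        # Hard routing at inference: the lowest-scoring tokens bypass the block entirely.
        num_keep = max(1, int(round((1.0 - compress.item()) * x.shape[1])))
        keep_idx = scores.topk(num_keep, dim=1).indices                # (B, k)
        gather_idx = keep_idx.unsqueeze(-1).expand(-1, -1, x.shape[-1])
        kept = torch.gather(x, 1, gather_idx)
        h = kept + self.attn(self.norm1(kept), self.norm1(kept), self.norm1(kept))[0]
        h = h + self.mlp(self.norm2(h))
        out = x.clone()
        out.scatter_(1, gather_idx, h)                                 # skipped tokens pass through unchanged
        return out


# Usage sketch: the per-timestep ratio modulates how aggressively each layer drops tokens.
# block = RoutedDiTBlock(dim=768)
# ts_ratio = TimestepRatio(num_timesteps=1000)
# y = block(torch.randn(2, 256, 768), ts_ratio(t=999))
```

Because both ratio parameters start at zero, the sketch begins from the dense, uncompressed model and only learns to drop tokens where doing so does not hurt the training objective, matching the zero-initialization behavior described in the abstract.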