Diffusion Transformers (DiTs) have emerged as a widely adopted backbone for high-fidelity image and video generation, yet their iterative denoising process incurs high computational cost. Existing training-free acceleration methods rely on feature caching and reuse under an assumption of temporal stability; however, reusing features across multiple steps can lead to latent drift and visual degradation. We observe that model outputs evolve smoothly along much of the diffusion trajectory, which enables principled prediction rather than naive reuse. Based on this insight, we propose \textbf{PrediT}, a training-free acceleration framework that formulates feature prediction as a linear multistep problem. We employ classical linear multistep methods to forecast future model outputs from historical evaluations, and pair them with a corrector that activates in high-dynamics regions to prevent error accumulation. A dynamic step modulation mechanism adaptively adjusts the prediction horizon by monitoring the feature change rate. Together, these components deliver substantial acceleration while preserving generation fidelity. Extensive experiments show that our method achieves up to $5.54\times$ latency reduction across a range of DiT-based image and video generation models with negligible quality degradation.
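To make the prediction step concrete, consider one simple instance of a linear multistep predictor; this is an illustrative sketch with two history points, not necessarily the order or coefficients used by PrediT. Writing $f_n = \epsilon_\theta(x_{t_n}, t_n)$ for the model output at step $n$, a two-point extrapolation forecasts the next output from past evaluations instead of reusing $f_n$ verbatim:
\[
\hat{f}_{n+1} \;=\; f_n + \frac{t_{n+1} - t_n}{t_n - t_{n-1}}\,\bigl(f_n - f_{n-1}\bigr),
\]
which reduces to $\hat{f}_{n+1} = 2 f_n - f_{n-1}$ under uniform step spacing. In this sketch, the corrector would replace $\hat{f}_{n+1}$ with a true network evaluation whenever a monitored change-rate statistic (e.g.\ $\lVert f_n - f_{n-1}\rVert / \lVert f_{n-1}\rVert$, an assumed proxy) exceeds a threshold, and dynamic step modulation would lengthen or shorten the run of predicted steps as that statistic falls or rises.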