Diffusion models have garnered significant interest from the community for their strong generative ability across various applications. However, their typical multi-step, sequential denoising process incurs high cumulative latency and precludes parallel computation. To address this, we introduce AsyncDiff, a universal and plug-and-play acceleration scheme that enables model parallelism across multiple devices. Our approach divides the cumbersome noise prediction model into multiple components, assigning each to a different device. To break the dependency chain between these components, it transforms the conventional sequential denoising into an asynchronous process by exploiting the high similarity between hidden states in consecutive diffusion steps. Consequently, each component can compute in parallel on a separate device. The proposed strategy significantly reduces inference latency while minimally impacting generative quality. Specifically, for Stable Diffusion v2.1, AsyncDiff achieves a 2.7x speedup with negligible degradation and a 4.0x speedup with only a slight reduction of 0.38 in CLIP Score on four NVIDIA A5000 GPUs. Our experiments also demonstrate that AsyncDiff can be readily applied to video diffusion models with encouraging performance. The code is available at https://github.com/czg1225/AsyncDiff.
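The latency benefit of the asynchronous scheme described above can be illustrated with a minimal cost model. This is a sketch under simplifying assumptions (uniform per-component cost, communication overhead ignored); the function names are illustrative, not the authors' API. The idea: with the model split into K components, sequential denoising runs K serial component passes per step, whereas the asynchronous schedule lets component k at step t consume the highly similar hidden state that component k-1 produced at the previous step, so the K components execute concurrently like a pipeline.

```python
"""Toy latency model of AsyncDiff-style asynchronous denoising.

Assumptions (hypothetical, for illustration): each of the K model
components costs one time unit per pass, and cross-device
communication is free.
"""


def sequential_latency(steps: int, components: int) -> int:
    # Conventional denoising: K serial component passes per step.
    return steps * components


def async_latency(steps: int, components: int) -> int:
    # Pipelined schedule: all K components run concurrently once the
    # pipeline is full; filling it costs (components - 1) extra rounds.
    return steps + components - 1


if __name__ == "__main__":
    T, K = 50, 4  # e.g. 50 denoising steps, model split across 4 GPUs
    seq = sequential_latency(T, K)
    par = async_latency(T, K)
    print(f"sequential: {seq} units, async: {par} units, "
          f"speedup ~{seq / par:.2f}x")
```

Under these idealized assumptions the speedup approaches K for long step counts; the measured 2.7x-4.0x figures on four GPUs fall below that bound because real component costs are uneven and communication is not free.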