This paper introduces PipeFusion, a novel approach that harnesses multi-GPU parallelism to address the high computational cost and latency of generating high-resolution images with diffusion transformer (DiT) models. PipeFusion splits images into patches and distributes the network layers across multiple devices, orchestrating communication and computation in a pipeline-parallel manner. By exploiting the high similarity between inputs from adjacent diffusion steps, PipeFusion eliminates pipeline waiting time by reusing one-step stale feature maps to provide context for the current step. Our experiments demonstrate that it can generate images at resolutions where existing DiT parallel approaches run out of memory (OOM). PipeFusion also significantly reduces the required communication bandwidth, enabling DiT inference to be hosted on GPUs connected via PCIe rather than costlier NVLink infrastructure, which substantially lowers the overall operational expense of serving DiT models. Our code is publicly available at https://github.com/PipeFusion/PipeFusion.
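The stale-feature reuse described above can be illustrated with a minimal sketch. This is not the PipeFusion implementation; all function and variable names here are hypothetical, and the "layer" is a toy stand-in. The point is the buffer rotation: each patch update at step t reads context from the feature maps computed at step t-1, so no device stalls waiting for fresh activations from its peers within the current step.

```python
# Hypothetical sketch of one-step stale feature-map reuse, assuming a
# simplified setting where each "device" owns one patch and context is
# the aggregate of all patches' activations.
import numpy as np

def toy_layer(patch, context):
    """Stand-in for a DiT block: mixes a patch with pooled context."""
    return 0.9 * patch + 0.1 * context.mean(axis=0)

def stale_context_pipeline(patches, num_steps):
    """Update every patch using STALE (previous-step) activations of the
    other patches as context. Adjacent diffusion steps have highly
    similar inputs, so the one-step-old context is a close approximation
    and the pipeline never blocks on fresh cross-patch activations."""
    feats = [p.copy() for p in patches]   # step-t activations
    stale = [p.copy() for p in patches]   # step-(t-1) activations
    for _ in range(num_steps):
        ctx = np.stack(stale)             # context is one step old
        feats, stale = [toy_layer(p, ctx) for p in feats], feats
    return feats
```

In a real multi-GPU deployment the context exchange would overlap with computation (e.g. asynchronous peer-to-peer transfers over PCIe), which is what lets the bandwidth requirement stay low.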