We introduce FedDM, a novel framework for the federated training of diffusion models. Our theoretical analysis establishes the convergence of diffusion models trained in a federated setting and presents the specific conditions under which this convergence is guaranteed. We propose a suite of training algorithms that use the U-Net architecture as the backbone of our diffusion models: FedDM-vanilla, a basic Federated Averaging variant; FedDM-prox, which adds a proximal term to the local objective to handle data heterogeneity among clients; and FedDM-quant, which incorporates a quantization module to reduce the size of model updates, thereby improving communication efficiency across the federated network. We evaluate our algorithms on FashionMNIST (28x28 resolution), CIFAR-10 (32x32 resolution), and CelebA (64x64 resolution) for DDPMs, as well as LSUN Church Outdoors (256x256 resolution) for LDMs, focusing exclusively on the image modality. Our evaluation results demonstrate that FedDM algorithms maintain high generation quality across image resolutions. At the same time, quantized updates and the proximal term in the local training objective significantly improve communication efficiency (up to 4x) and model convergence, particularly in non-IID data settings, at the cost of higher FID scores (up to 1.75x).
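To make the two key ingredients concrete, the sketch below shows a generic FedProx-style local step with a proximal term and a uniform quantization of client updates before upload. This is a minimal illustration under our own assumptions, not the paper's exact implementation; the names `local_step`, `quantize_update`, `mu`, and `num_bits` are illustrative, and the diffusion loss is passed in as an opaque `loss_fn` (e.g. the DDPM noise-prediction MSE).

```python
# Minimal sketch (assumed details, not the authors' exact code): a proximal
# local update and uniform quantization of the uploaded client update.
import torch

def local_step(model, global_params, loss_fn, batch, optimizer, mu=0.01):
    """One local SGD step on the denoising loss plus a proximal term."""
    optimizer.zero_grad()
    loss = loss_fn(model, batch)  # e.g. DDPM noise-prediction MSE
    # Proximal term keeps local weights close to the current global model.
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()

def quantize_update(update, num_bits=8):
    """Uniformly quantize a weight-delta tensor to shrink the upload size."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = update.abs().max() / qmax
    if scale == 0:
        return update
    q = torch.round(update / scale).clamp(-qmax - 1, qmax)
    return q * scale  # de-quantized representation aggregated by the server
```

In this sketch, a client would run `local_step` for its local epochs, compute the update as the difference between its final weights and `global_params`, and send `quantize_update(delta)` to the server for averaging.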