In the landscape of generative artificial intelligence, diffusion-based models have emerged as a promising method for generating synthetic images. However, applying diffusion models poses numerous challenges, particularly concerning data availability, computational requirements, and privacy. Traditional approaches to these shortcomings, such as federated learning, often impose significant computational burdens on individual clients, especially those with constrained resources. In response to these challenges, we introduce a novel approach to distributed collaborative diffusion models inspired by split learning. Our approach enables collaborative training of diffusion models while alleviating the computational burden on clients during image synthesis. This reduction is achieved by retaining the data and computationally inexpensive operations locally at each client while outsourcing the computationally expensive operations to shared, more capable server resources. Through experiments on the widely used CelebA dataset, our approach demonstrates enhanced privacy by reducing the need to share raw data. These capabilities hold significant potential across various application areas, including the design of edge computing solutions. Thus, our work advances distributed machine learning by contributing to the evolution of collaborative diffusion models.
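To make the client/server partition concrete, the sketch below shows one training step of a denoising diffusion model split in the spirit of split learning: the client keeps its raw data and performs only the cheap forward-noising step, while a stand-in for the heavy denoising network runs on the server. This is a minimal, hypothetical illustration, not the paper's actual implementation; names such as `ServerDenoiser` and `split_training_step` are invented for exposition, both parties are simulated in one process, and the tensors marked as "sent" or "returned" would cross the network in a real deployment.

```python
# Hypothetical sketch of a split-learning style diffusion training step.
# Assumptions: a toy MLP stands in for the expensive server-side denoiser,
# and flattened vectors stand in for images.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

class ServerDenoiser(nn.Module):
    """Stand-in for the computationally expensive denoising network hosted on the server."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x_t, t):
        t_feat = (t.float() / T).unsqueeze(-1)  # simple timestep conditioning
        return self.net(torch.cat([x_t, t_feat], dim=-1))

server_model = ServerDenoiser()
server_opt = torch.optim.Adam(server_model.parameters(), lr=1e-4)

def split_training_step(x0):
    """Client keeps raw data and cheap noising; the server runs the heavy denoiser."""
    # --- client side: inexpensive operations on local, private data ---
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)                      # noise target stays on the client
    ab = alpha_bars[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps  # noised sample; only this (and t) is sent

    # --- server side: expensive denoising forward pass ---
    server_opt.zero_grad()
    eps_pred = server_model(x_t, t)                 # prediction returned to the client

    # --- client side: loss against the locally held noise target ---
    # (in a real split, the gradient w.r.t. eps_pred would be sent back to the server)
    loss = nn.functional.mse_loss(eps_pred, eps)

    # --- server side: backward pass and parameter update ---
    loss.backward()
    server_opt.step()
    return loss.item()

# usage: one step on a toy batch of flattened "images"
print(split_training_step(torch.randn(8, 64)))
```

Under these assumptions, the raw sample `x0` and the noise target `eps` never leave the client, which is the source of the reduced data-sharing claim, while the dominant cost of the denoiser's forward and backward passes falls on the server.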