Federated learning (FL) has become a cornerstone of decentralized learning. In many scenarios, the incoming data distribution changes dynamically over time, introducing continual learning (CL) problems. This continual federated learning (CFL) task presents unique challenges, particularly catastrophic forgetting and non-IID input data. Existing solutions include storing historical data in a replay buffer or leveraging generative adversarial networks to synthesize it. Motivated by recent advances in diffusion models for generative tasks, this paper introduces DCFL, a novel framework tailored to the challenges of CFL in dynamic distributed learning environments. Our approach harnesses a conditional diffusion model to generate synthetic historical data at each local device during communication, effectively mitigating latent shifts in the dynamic input data distribution. We provide a convergence bound for the proposed CFL framework and demonstrate its promising performance across multiple datasets, showcasing its effectiveness in tackling the complexities of CFL tasks.
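To make the replay idea concrete, below is a minimal sketch, not the paper's implementation, of class-conditional DDPM-style generation on a federated client: a small conditional denoiser samples synthetic examples of previously seen classes, which can be mixed with the current task's batch to counter forgetting. All names (`CondDenoiser`, `sample_replay`), the feature dimension, and the noise schedule are hypothetical illustrations under standard DDPM assumptions.

```python
# Hypothetical sketch of conditional-diffusion replay on a federated client.
import torch
import torch.nn as nn

T = 100  # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative products \bar{alpha}_t

class CondDenoiser(nn.Module):
    """Tiny epsilon-predictor conditioned on timestep and class label."""
    def __init__(self, dim=32, n_classes=10):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, dim)
        self.net = nn.Sequential(
            nn.Linear(dim + dim + 1, 128), nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x_t, t, y):
        t_feat = (t.float() / T).unsqueeze(-1)   # scalar timestep feature
        h = torch.cat([x_t, self.label_emb(y), t_feat], dim=-1)
        return self.net(h)                        # predicted noise epsilon

@torch.no_grad()
def sample_replay(model, labels, dim=32):
    """Ancestral DDPM sampling of synthetic 'historical' features for labels."""
    x = torch.randn(labels.shape[0], dim)
    for t in reversed(range(T)):
        t_batch = torch.full((labels.shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch, labels)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:  # add posterior noise except at the final step
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

# Client-side continual step: mix generated replay with the current batch.
denoiser = CondDenoiser()
old_labels = torch.randint(0, 5, (16,))  # classes seen in earlier tasks
replay_x = sample_replay(denoiser, old_labels)
# train_x = torch.cat([current_x, replay_x]); train_y = torch.cat([current_y, old_labels])
```

The sketch operates on low-dimensional features for brevity; in practice the denoiser would be an image-scale network, and the replay ratio between current and generated data is a tunable design choice.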