A central goal of machine learning is Continual Federated Learning (CFL), which enhances the efficiency, privacy, and scalability of AI systems while learning from streaming data. The primary challenge in CFL is global catastrophic forgetting, wherein the accuracy of a global model trained on new tasks declines on old tasks. In this work, we propose Continual Federated Learning with Aggregated Gradients (C-FLAG), a novel replay-memory-based federated strategy that combines edge-based gradient updates on memory with aggregated gradients on the current data. We provide a convergence analysis of C-FLAG, which addresses forgetting and bias while converging at a rate of $O(1/\sqrt{T})$ over $T$ communication rounds. We formulate an optimization sub-problem that minimizes catastrophic forgetting, translating CFL into an iterative algorithm with adaptive learning rates that ensure seamless learning across tasks. We empirically show that C-FLAG outperforms several state-of-the-art baselines in both task- and class-incremental settings on metrics such as accuracy and forgetting.
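The abstract's core idea, clients taking replay steps on stored old-task data alongside steps on current-task data before the server aggregates, can be illustrated with a minimal sketch. This is a hypothetical simplification on a toy linear-regression problem, not the authors' C-FLAG algorithm: the function names (`client_update`, `federated_round`), the fixed learning rates, and the plain FedAvg-style averaging are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, X, y):
    """Gradient of mean squared error for a linear model y ≈ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def client_update(w, memory, current, lr_mem=0.05, lr_cur=0.05, steps=5):
    """One client's local work: a replay step on the stored old-task
    buffer, then a step on current-task data (a stand-in for C-FLAG's
    edge-based memory updates; hypothetical hyperparameters)."""
    w = w.copy()
    for _ in range(steps):
        w -= lr_mem * grad(w, *memory)   # replay: counter forgetting
        w -= lr_cur * grad(w, *current)  # learn the new task
    return w

def federated_round(w_global, clients):
    """Server aggregates client models by simple averaging."""
    locals_ = [client_update(w_global, mem, cur) for mem, cur in clients]
    return np.mean(locals_, axis=0)

# Toy setup: two clients, each holding a small replay buffer ("old task")
# and current-task data, all drawn from one shared linear model.
d = 3
w_true = np.array([1.0, -2.0, 0.5])

def make_data(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true

clients = [(make_data(20), make_data(40)) for _ in range(2)]

w = np.zeros(d)
for t in range(50):  # communication rounds
    w = federated_round(w, clients)
print(np.round(w, 2))
```

Because every client interleaves a memory step with a current-task step, the averaged model keeps fitting the old data while learning the new; the real method additionally adapts the two learning rates to control forgetting.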