Continual Federated Learning (CFL) allows distributed devices to collaboratively learn novel concepts from continuously shifting training data while avoiding the forgetting of previously learned tasks. To tackle this challenge, most current CFL approaches rely on extensive rehearsal of previous data. Although effective, rehearsal incurs memory overhead and may also violate data privacy. We therefore turn to regularization techniques for CFL, which are cost-efficient and require no sample caching or rehearsal. Specifically, we first apply traditional regularization techniques to CFL and observe that existing techniques, especially synaptic intelligence, achieve promising results under homogeneous data distributions but fail when the data is heterogeneous. Motivated by this observation, we propose a simple yet effective regularization algorithm for CFL, named FedSSI, which tailors synaptic intelligence to CFL with heterogeneous data. FedSSI not only reduces computational overhead without rehearsal but also addresses the data heterogeneity issue. Extensive experiments show that FedSSI achieves superior performance compared with state-of-the-art methods.
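To make the regularization idea concrete, the following is a minimal single-machine sketch of synaptic intelligence (SI), the technique the abstract builds on, applied to a toy two-task quadratic problem. It is not the paper's FedSSI algorithm; the hyperparameters (`c`, `xi`, `lr`, step counts) and the quadratic losses are illustrative assumptions. SI accumulates a per-parameter path integral during training and penalizes later tasks for moving parameters that were important to earlier ones.

```python
import numpy as np

# Minimal sketch of synaptic intelligence (SI) on a toy quadratic task.
# During task t, SI accumulates a per-parameter path integral
#   w_k += -g_k * delta_theta_k,
# then regularizes the next task with the surrogate loss
#   L_reg = c * sum_k omega_k * (theta_k - theta_old_k)^2,
# where omega_k = w_k / ((theta_old_k - theta_init_k)^2 + xi).

def grad_quadratic(theta, target):
    """Gradient of the toy task loss 0.5 * ||theta - target||^2."""
    return theta - target

def train_task(theta, target, steps=200, lr=0.1,
               omega=None, theta_old=None, c=1.0):
    """Plain SGD on one task while accumulating the SI path integral w."""
    w = np.zeros_like(theta)
    for _ in range(steps):
        task_grad = grad_quadratic(theta, target)
        g = task_grad
        if omega is not None:  # add gradient of the SI surrogate loss
            g = g + 2.0 * c * omega * (theta - theta_old)
        delta = -lr * g
        w += -task_grad * delta  # path integral uses the task gradient only
        theta = theta + delta
    return theta, w

xi = 0.1  # damping term (assumed value)
theta = np.zeros(2)
theta_init = theta.copy()

# Task 1: only the first coordinate matters (drives theta[0] toward 2).
theta, w = train_task(theta, np.array([2.0, 0.0]))
omega = w / ((theta - theta_init) ** 2 + xi)
theta_old = theta.copy()

# Task 2: pulls theta[1] toward 3; without SI, theta[0] would collapse to 0.
theta, _ = train_task(theta, np.array([0.0, 3.0]),
                      omega=omega, theta_old=theta_old, c=5.0)
print(theta)  # theta[0] stays well above 0; theta[1] moves toward 3
```

Because `omega` is large only for the coordinate that task 1 actually used, the penalty protects that coordinate while leaving the other free to learn task 2. The abstract's observation is that this importance estimate degrades when clients hold heterogeneous data, which is what FedSSI is designed to fix.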