Federated continual learning (FCL) aims to learn from sequential data streams in the decentralized federated learning setting while simultaneously mitigating the catastrophic forgetting issue of classical continual learning. Existing FCL methods typically rely on rehearsal mechanisms, which can lead to privacy violations or impose onerous additional storage and computational burdens. In this work, we propose an efficient, non-IID-robust federated continual learning framework called Federated Prototype-Augmented Prompt Learning (FPPL). FPPL collaboratively learns lightweight prompts augmented by prototypes, without rehearsal. On the client side, a fusion function fully exploits the knowledge contained in task-specific prompts to alleviate catastrophic forgetting. In addition, global prototypes aggregated at the server are used to obtain a unified representation through contrastive learning, mitigating the data heterogeneity that arises from non-IID distributions. On the server side, locally uploaded prototypes are used to debias the classifier, further alleviating the performance degradation caused by both non-IID data and catastrophic forgetting. Empirical evaluations demonstrate the effectiveness of FPPL, which achieves notable performance with an efficient design while remaining robust across diverse degrees of non-IID data. Code is available at: https://github.com/ycheoo/FPPL.
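To make the prototype mechanism concrete, the following is a minimal pure-Python sketch of the three steps the abstract outlines: client-side class prototypes (per-class mean features), server-side aggregation of prototypes across clients, and an InfoNCE-style contrastive loss that pulls a local feature toward its class's global prototype. All function names and the specific loss form are illustrative assumptions, not taken from the FPPL codebase.

```python
import math

def class_prototypes(features, labels):
    """Client side (assumed): mean feature vector per class seen locally."""
    sums, counts = {}, {}
    for f, c in zip(features, labels):
        s = sums.setdefault(c, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[c] = counts.get(c, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def aggregate_prototypes(client_protos):
    """Server side (assumed): average each class prototype over the clients that hold it."""
    buckets = {}
    for protos in client_protos:
        for c, p in protos.items():
            buckets.setdefault(c, []).append(p)
    return {c: [sum(vs) / len(ps) for vs in zip(*ps)] for c, ps in buckets.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-8)

def prototype_contrastive_loss(feature, label, global_protos, tau=0.5):
    """Illustrative InfoNCE-style loss: the global prototype of the feature's
    class is the positive; prototypes of other classes are negatives."""
    classes = sorted(global_protos)
    logits = [cosine(feature, global_protos[c]) / tau for c in classes]
    m = max(logits)  # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    prob = exps[classes.index(label)] / sum(exps)
    return -math.log(prob + 1e-12)
```

Under this sketch, a feature aligned with its own class's global prototype incurs a lower loss than one aligned with a different class, which is the unification effect the abstract attributes to contrastive learning with global prototypes; only the low-dimensional prototypes, not raw data, leave the client.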