Federated Continual Learning (FCL) leverages inter-client collaboration to balance the acquisition of new knowledge and the retention of prior knowledge on non-stationary data. However, existing batch-based FCL methods do not adapt well to streaming scenarios in which old and new data share overlapping categories and task identifiers are absent, leading to indistinguishable old and new knowledge, uncertain task assignment for samples, and knowledge confusion. To address this, we propose a streaming federated continual learning setting: in each federated learning (FL) round, clients process streaming data with disjoint samples and potentially overlapping categories, without task identifiers, and must sustain inference capability over all previously seen categories after each round. We then introduce FedKACE, which comprises: 1) an adaptive inference-model switching mechanism that enables unidirectional switching from the local model to the global model, trading off personalization against generalization; 2) an adaptive gradient-balanced replay scheme that reconciles new-knowledge learning with old-knowledge retention under overlapping-class scenarios; 3) a kernel spectral boundary buffer maintenance strategy that preserves samples with high information content and high boundary influence to optimize cross-round knowledge retention. Experiments across multiple scenarios, together with a regret analysis, demonstrate the effectiveness of FedKACE.
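As a rough illustration of the gradient-balanced replay idea in point 2), the sketch below shows one way a client-side update could balance gradients from the new streaming batch against gradients from the replay buffer: each gradient is computed separately, the replay gradient is rescaled to the norm of the new-data gradient, and the two are mixed before the optimizer step. This is a minimal sketch under our own assumptions; the function name `balanced_replay_step`, the fixed mixing weight `alpha`, and the norm-matching rule are illustrative placeholders, not the adaptive scheme proposed in the paper.

```python
import torch

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def balanced_replay_step(model, criterion, optimizer,
                         new_batch, replay_batch, alpha=0.5):
    """One hypothetical gradient-balanced update (illustrative only):
    rescale the replay gradient to the norm of the new-data gradient,
    then mix the two. The paper's scheme would adapt this balance; here
    `alpha` is a fixed assumed weight."""
    params = [p for p in model.parameters() if p.requires_grad]

    x_new, y_new = new_batch
    g_new = flat_grad(criterion(model(x_new), y_new), params)

    x_old, y_old = replay_batch
    g_old = flat_grad(criterion(model(x_old), y_old), params)

    # Rescale the replay gradient so old knowledge neither dominates
    # nor vanishes when class distributions overlap across rounds.
    g_old = g_old * (g_new.norm() / (g_old.norm() + 1e-12))
    g = alpha * g_new + (1.0 - alpha) * g_old

    # Write the combined direction back into .grad and take one step.
    optimizer.zero_grad()
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = g[offset:offset + n].view_as(p).clone()
        offset += n
    optimizer.step()
```

In this sketch the norm matching plays the role of the balance mechanism; replacing the fixed `alpha` with a quantity driven by the observed gradient conflict between old and new classes would be one way to make the trade-off adaptive, as the abstract describes.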