We introduce Federated Gaussian Task Embedding and Alignment (FedGTEA), a novel framework for Federated Class-Incremental Learning. FedGTEA is designed to capture task-specific knowledge and model uncertainty in a scalable and communication-efficient manner. On the client side, a Cardinality-Agnostic Task Encoder (CATE) produces Gaussian-distributed task embeddings that encode task knowledge, address statistical heterogeneity, and quantify data uncertainty. Importantly, CATE maintains a fixed parameter size regardless of the number of tasks, ensuring scalability across long task sequences. On the server side, FedGTEA uses the 2-Wasserstein distance to measure the gaps between Gaussian task embeddings, and we formulate a Wasserstein loss that enforces inter-task separation. This probabilistic formulation not only enhances representation learning but also preserves task-level privacy by avoiding the direct transmission of latent embeddings, in line with the privacy constraints of federated learning. Extensive empirical evaluations on standard benchmarks demonstrate that FedGTEA achieves superior classification performance and significantly mitigates forgetting, consistently outperforming strong existing baselines.
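The abstract does not spell out the loss, but the squared 2-Wasserstein distance between Gaussians has a well-known closed form, which reduces to a simple expression for diagonal covariances. The sketch below is a minimal PyTorch illustration under our own assumptions: diagonal covariances parameterized by a log-variance, a per-task `(mu, logvar)` pair exchanged with the server, and a margin-based hinge as the separation penalty. The function names, the hinge form, and the `margin` parameter are all hypothetical, not FedGTEA's actual implementation.

```python
import torch

def gaussian_w2_sq(mu1, logvar1, mu2, logvar2):
    """Squared 2-Wasserstein distance between diagonal Gaussians.

    The general closed form,
        W2^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2}),
    reduces for diagonal (hence commuting) covariances to
        ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2.
    """
    sigma1 = torch.exp(0.5 * logvar1)
    sigma2 = torch.exp(0.5 * logvar2)
    return ((mu1 - mu2) ** 2).sum(-1) + ((sigma1 - sigma2) ** 2).sum(-1)

def inter_task_separation_loss(mus, logvars, margin=1.0):
    """Hypothetical margin-based separation loss over all task pairs.

    mus, logvars: (T, d) tensors holding one Gaussian embedding per task.
    Penalizes any pair of task embeddings closer than `margin` in W2.
    """
    T = mus.shape[0]
    loss, pairs = mus.new_zeros(()), 0
    for i in range(T):
        for j in range(i + 1, T):
            w2 = gaussian_w2_sq(mus[i], logvars[i], mus[j], logvars[j]).sqrt()
            loss = loss + torch.relu(margin - w2)
            pairs += 1
    return loss / max(pairs, 1)
```

Note that in this sketch only the Gaussian parameters `(mu, logvar)` would be exchanged, not raw latent embeddings, which is consistent with the communication-efficiency and task-level privacy claims above.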