This paper focuses on Federated Domain-Incremental Learning (FDIL), where each client continually learns incremental tasks whose domains shift across clients. We propose a novel adaptive knowledge matching-based personalized FDIL approach (pFedDIL), which allows each client to adaptively select an appropriate incremental task learning strategy based on the correlation between the new task and the knowledge of previous tasks. More specifically, when a new task arrives, each client first estimates the local correlations between the new task and its previous tasks. Based on these correlations, the client then chooses to train the new task either from a newly initialized model or from a previous model with similar knowledge, while simultaneously transferring knowledge from previous tasks. Furthermore, to identify the correlations between the new task and previous tasks on each client, we attach an auxiliary classifier to each target classification model and propose sharing partial parameters between the target classification model and its auxiliary classifier to condense the model parameters. We conduct extensive experiments on several datasets, and the results demonstrate that pFedDIL outperforms state-of-the-art methods by up to 14.35\% in terms of average accuracy over all tasks.
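To make the per-client knowledge-matching step concrete, the following is a minimal Python sketch of one plausible reading of the abstract: auxiliary classifiers score how well each previous task's knowledge matches the new task's local data, and the client either reuses the best-matching previous model or starts from a fresh one, keeping the normalized correlations for later knowledge transfer. The function name `select_and_transfer`, the `new_model_factory` callable, and the `threshold` parameter are illustrative assumptions, not the paper's actual interface.

```python
import copy
import numpy as np


def softmax(x):
    # Numerically stable softmax over a 1-D array of correlation scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()


def select_and_transfer(new_task_data, prev_models, aux_classifiers,
                        new_model_factory, threshold=0.5):
    """Hypothetical sketch of a client's knowledge-matching step.

    prev_models / aux_classifiers: one pair per previously learned task;
    each auxiliary classifier returns a score in [0, 1] estimating how
    well that task's knowledge matches the new task's local data.
    """
    if not prev_models:
        # First task on this client: no previous knowledge to match against.
        return new_model_factory(), np.array([])

    # 1. Estimate the correlation of the new task with each previous task.
    scores = np.array([aux(new_task_data) for aux in aux_classifiers])
    weights = softmax(scores)  # normalized correlations for knowledge transfer

    # 2. Choose the initialization: reuse the most correlated previous model
    #    if the match is strong enough, otherwise start from a fresh model.
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        model = copy.deepcopy(prev_models[best])
    else:
        model = new_model_factory()

    # 3. The returned weights would then scale the knowledge transferred
    #    from each previous task (e.g., as distillation-loss coefficients)
    #    while training the new task.
    return model, weights
```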