We introduce RefFiL, a rehearsal-free federated domain-incremental learning framework based on a global prompt-sharing paradigm, to alleviate catastrophic forgetting in settings where unseen domains are continually learned. Typical remedies for forgetting, such as auxiliary datasets or retaining private data from earlier tasks, are not viable in federated learning (FL) because of devices' limited resources. RefFiL addresses this by learning domain-invariant knowledge and incorporating domain-specific prompts from the domains represented by different FL participants. A key feature of RefFiL is the generation of fine-grained local prompts by our domain-adaptive prompt generator, which learns effectively from local domain knowledge while maintaining distinctive boundaries on a global scale. We also introduce a domain-specific prompt contrastive learning loss that differentiates locally generated prompts from those of other domains, enhancing RefFiL's precision and effectiveness. Compared to existing methods, RefFiL significantly alleviates catastrophic forgetting without requiring extra memory, making it well suited to privacy-sensitive and resource-constrained devices.
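The abstract does not specify the exact form of the domain-specific prompt contrastive learning loss; the following is a minimal sketch, assuming an InfoNCE-style objective in PyTorch in which locally generated prompts are pulled toward their own domain and pushed away from prompts received from other FL participants. The function name `prompt_contrastive_loss`, the inputs `local_prompts` and `other_domain_prompts`, and the `temperature` parameter are hypothetical placeholders, not the paper's actual interface.

```python
import torch
import torch.nn.functional as F


def prompt_contrastive_loss(local_prompts, other_domain_prompts, temperature=0.5):
    """InfoNCE-style contrastive loss over prompt embeddings (illustrative sketch).

    local_prompts:        (N, D) prompts produced by the local domain-adaptive
                          prompt generator for the current domain.
    other_domain_prompts: (M, D) prompts shared by other FL participants,
                          treated as negatives.
    """
    local = F.normalize(local_prompts, dim=-1)
    others = F.normalize(other_domain_prompts, dim=-1)

    # Positive similarity: each local prompt vs. the mean local prompt,
    # a simple stand-in for "same-domain" positives in this sketch.
    anchor = local.mean(dim=0, keepdim=True)            # (1, D)
    pos_sim = (local @ anchor.t()) / temperature        # (N, 1)

    # Negative similarities: local prompts vs. other-domain prompts.
    neg_sim = (local @ others.t()) / temperature        # (N, M)

    # Cross-entropy with the positive logit at index 0 pulls local prompts
    # together and pushes them away from other domains' prompts.
    logits = torch.cat([pos_sim, neg_sim], dim=1)       # (N, 1 + M)
    labels = torch.zeros(local.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)


# Example usage with random embeddings (N=8 local prompts, M=16 foreign prompts, D=64).
if __name__ == "__main__":
    loss = prompt_contrastive_loss(torch.randn(8, 64), torch.randn(16, 64))
    print(loss.item())
```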