In federated healthcare systems, Federated Class-Incremental Learning (FCIL) has emerged as a key paradigm, enabling continuous, adaptive model learning across distributed clients while safeguarding data privacy. In practice, however, the data held by the agent nodes of such a distributed framework is often non-independent and identically distributed (non-IID), rendering traditional continual learning methods inapplicable. To address these challenges, this paper considers a more comprehensive set of incremental task scenarios and proposes a dynamic memory allocation strategy for exemplar storage built on the data replay mechanism. The strategy exploits the inherent potential of data heterogeneity while accounting for the performance fairness of all participating clients, yielding a balanced and adaptive solution for mitigating catastrophic forgetting. Unlike schemes that fix each client's exemplar memory in advance, the proposed approach allocates the limited storage budget rationally across clients to improve model performance. Extensive experiments on three medical image datasets demonstrate significant performance improvements over existing baseline models.
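The abstract does not specify the exact allocation rule, but the core idea of heterogeneity-aware exemplar budgeting can be sketched as follows. This is a minimal illustrative example, assuming (hypothetically) that each client's share of the global replay budget is proportional to the entropy of its local class distribution, so that clients with more heterogeneous data receive more exemplar slots; the paper's actual criterion may differ.

```python
import math

def allocate_exemplar_memory(client_class_counts, total_budget):
    """Hypothetical sketch: split a global exemplar budget across clients
    in proportion to the entropy of each client's class distribution.

    client_class_counts: list of per-client class histograms,
        e.g. [[50, 50], [99, 1]] for two clients with two classes.
    total_budget: total number of exemplar slots to distribute.
    Returns a list with one slot count per client.
    """
    entropies = []
    for counts in client_class_counts:
        n = sum(counts)
        probs = [c / n for c in counts if c > 0]
        # Shannon entropy of the local label distribution (nats);
        # higher entropy = more balanced / heterogeneous local data.
        entropies.append(-sum(p * math.log(p) for p in probs))
    total = sum(entropies) or 1.0  # guard against all-zero entropy
    return [round(total_budget * e / total) for e in entropies]
```

Under this toy rule, a client holding a balanced mix of classes is granted a larger replay buffer than a client whose data is dominated by a single class, in contrast to a fixed equal split.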