Owing to its ability to model more realistic and dynamic problems, Federated Continual Learning (FCL) has attracted growing research interest. A well-known problem in this setting is catastrophic forgetting, whereby the learning model tends to focus on the most recent tasks while forgetting previously learned knowledge. Most current FCL approaches address this problem with generative-based solutions. Such solutions, however, require multiple training epochs over the data, implying an offline setting in which datasets are stored locally and remain unchanged over time. Furthermore, they are tailored solely to vision tasks. To overcome these limitations, we propose a new approach that handles different modalities in the online scenario, where new data arrive as a stream of mini-batches that can be processed only once. To mitigate catastrophic forgetting, we propose an uncertainty-aware, memory-based approach. Specifically, we suggest using an estimator based on the Bregman Information (BI) to compute the model's variance at the sample level. Using measures of predictive uncertainty, we retrieve samples with specific characteristics and, by retraining the model on such samples, demonstrate the potential of this approach to reduce forgetting in realistic settings while preserving data confidentiality and achieving communication efficiency competitive with state-of-the-art approaches.
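To make the uncertainty estimator concrete, below is a minimal sketch, not the paper's implementation, of a BI-based variance estimate using the log-sum-exp generator over stochastic forward passes. The use of MC-dropout as the source of logit variability, the top-k retrieval rule, and the names `bregman_information` and `select_for_memory` are all illustrative assumptions not taken from the abstract.

```python
# Sketch: sample-level predictive variance via the Bregman Information (BI)
# with the log-sum-exp (LSE) generator, BI = E[LSE(z)] - LSE(E[z]).
# Assumes a classifier with dropout so repeated forward passes differ.
import torch

def bregman_information(logits: torch.Tensor) -> torch.Tensor:
    """logits: (n_draws, batch, n_classes) stochastic forward passes.
    Returns a per-sample variance measure, non-negative by Jensen's
    inequality since LSE is convex."""
    lse = torch.logsumexp(logits, dim=-1)                    # (n_draws, batch)
    mean_lse = lse.mean(dim=0)                               # E[LSE(z)]
    lse_of_mean = torch.logsumexp(logits.mean(dim=0), dim=-1)  # LSE(E[z])
    return mean_lse - lse_of_mean                            # (batch,)

def select_for_memory(model, x, n_draws: int = 10, k: int = 32):
    """Hypothetical retrieval rule: keep the k samples of the incoming
    mini-batch with the highest BI (the abstract only says samples with
    'specific characteristics' of predictive uncertainty are retrieved)."""
    model.train()  # keep dropout active so draws are stochastic
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_draws)])
    bi = bregman_information(draws)
    topk = torch.topk(bi, k=min(k, x.size(0))).indices
    return x[topk], bi[topk]
```

Under these assumptions, the selected samples would populate a replay buffer on which the model is periodically retrained, which is how a memory-based approach of this kind counteracts forgetting in a single-pass, online stream.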