Owing to its ability to model more realistic and dynamic problems, Federated Continual Learning (FCL) has attracted increasing attention in recent years. A well-known problem in this setting is catastrophic forgetting, whereby the learning model focuses on more recent tasks while forgetting previously acquired knowledge. The majority of current FCL approaches propose generative solutions to address this problem. However, such solutions require multiple training epochs over the data, implying an offline setting in which datasets are stored locally and remain unchanged over time. Furthermore, the proposed solutions are tailored solely to vision tasks. To overcome these limitations, we propose a new modality-agnostic approach for the online scenario, where new data arrive in streams of mini-batches that can only be processed once. To mitigate catastrophic forgetting, we propose an uncertainty-aware, memory-based approach. In particular, we suggest an estimator based on the Bregman Information (BI) to compute the model's variance at the sample level. Through measures of predictive uncertainty, we retrieve samples with specific characteristics and, by retraining the model on such samples, demonstrate the potential of this approach to reduce the forgetting effect in realistic settings.
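To make the uncertainty measure concrete: for a strictly convex generator \phi, the Bregman Information of a random variable X is its expected Bregman divergence from its mean, which reduces to the Jensen gap

\[
\mathrm{BI}_{\phi}(X) = \mathbb{E}\left[d_{\phi}\big(X, \mathbb{E}[X]\big)\right] = \mathbb{E}[\phi(X)] - \phi(\mathbb{E}[X]).
\]

Choosing \phi(x) = x^2 recovers the ordinary variance, which is why BI acts as a generalized variance. For illustration, one natural instantiation (one of several possible choices, not fixed above) takes X to be the model's logits over stochastic forward passes, with \phi the log-sum-exp function.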
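A minimal sketch of such a sample-level estimator, assuming Monte Carlo draws of per-sample logits (e.g., from dropout) and the log-sum-exp generator; the function name, toy data, and retrieval rule below are illustrative, not a fixed part of the method:

    import numpy as np
    from scipy.special import logsumexp

    def bregman_information(logit_draws):
        """Per-sample Bregman Information with the log-sum-exp generator.
        logit_draws: (n_draws, batch, n_classes) logits from stochastic
        forward passes; returns one non-negative score per batch element."""
        mean_of_phi = logsumexp(logit_draws, axis=-1).mean(axis=0)  # E[phi(X)]
        phi_of_mean = logsumexp(logit_draws.mean(axis=0), axis=-1)  # phi(E[X])
        return mean_of_phi - phi_of_mean  # Jensen gap, >= 0 by convexity

    # Toy usage: 8 stochastic draws over a batch of 32 samples, 10 classes.
    rng = np.random.default_rng(0)
    draws = rng.normal(size=(8, 32, 10))
    scores = bregman_information(draws)
    # Illustrative retrieval rule: replay the 4 most uncertain samples
    # (the exact selection criterion is left open here).
    replay_idx = np.argsort(scores)[-4:]

The Jensen gap is computed per batch element, so the resulting scores can directly rank individual samples for inclusion in the replay memory.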