Continual learning has emerged as a pivotal area of research, primarily because it enables models to continually acquire new knowledge while retaining previously learned information. However, catastrophic forgetting can severely impair model performance. In this study, we address network forgetting by introducing a novel framework termed Optimally-Weighted Maximum Mean Discrepancy (OWMMD), which penalizes alterations to learned representations through a Multi-Level Feature Matching Mechanism (MLFMM). Furthermore, we propose an Adaptive Regularization Optimization (ARO) strategy that refines the adaptive weight vectors, which autonomously assess the significance of each feature layer throughout optimization. The proposed ARO approach alleviates the over-regularization problem and facilitates the learning of future tasks. We conduct a comprehensive series of experiments, benchmarking our method against several established baselines. The empirical results indicate that our approach achieves state-of-the-art performance.
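For intuition, the sketch below shows one plausible PyTorch realization of the core idea: an MMD penalty on the drift of each layer's features away from a frozen snapshot of the old-task model, with a learnable per-layer weight vector. This is a minimal sketch under stated assumptions, not the authors' implementation; all identifiers (gaussian_mmd, MultiLevelMMDPenalty, the softmax parameterization of the weights) are hypothetical, and the paper's ARO strategy for optimizing the weight vector is not reproduced here.

```python
# Illustrative sketch only: a multi-level MMD penalty on feature drift
# with learnable per-layer weights. Hypothetical names throughout; the
# actual OWMMD/MLFMM/ARO formulations are defined in the paper.
import torch
import torch.nn.functional as F


def gaussian_mmd(x, y, sigma=1.0):
    """Biased estimate of squared MMD between batches x and y (RBF kernel)."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances between rows of a and b.
        d2 = torch.cdist(a, b, p=2).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


class MultiLevelMMDPenalty(torch.nn.Module):
    """Penalize drift of each layer's features from a frozen snapshot,
    weighting layers by a softmax over learnable logits."""

    def __init__(self, num_layers):
        super().__init__()
        # One learnable logit per matched feature layer; the softmax
        # below turns these into an adaptive weight vector.
        self.logits = torch.nn.Parameter(torch.zeros(num_layers))

    def forward(self, current_feats, snapshot_feats):
        # current_feats / snapshot_feats: lists of per-layer feature
        # tensors of shape (batch, ...); snapshots come from the model
        # frozen after the previous task.
        weights = F.softmax(self.logits, dim=0)
        losses = torch.stack([
            gaussian_mmd(c.flatten(1), s.flatten(1).detach())
            for c, s in zip(current_feats, snapshot_feats)
        ])
        return (weights * losses).sum()
```

In this reading, the weighted penalty is simply added to the new-task loss; letting the weights adapt during training, rather than fixing them uniformly, is what would allow less important layers to drift freely and thereby ease over-regularization.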