Class-Incremental Learning (CIL) requires models to continually acquire knowledge of new classes without forgetting old ones. Although Pre-trained Models (PTMs) have shown excellent performance in CIL, catastrophic forgetting still occurs as the model learns new concepts. Existing work adjusts the PTM with lightweight components, yet forgetting still arises at the {\em parameter and retrieval} levels. Specifically, iterative updates of the model result in parameter drift, while mistakenly retrieving irrelevant modules causes mismatches during inference. To this end, we propose MOdel Surgery (MOS) to rescue the model from forgetting previous knowledge. By training task-specific adapters, we continually adjust the PTM to downstream tasks. To mitigate parameter-level forgetting, we present an adapter merging approach to learn task-specific adapters, which aims to bridge the gap between different components while preserving task-specific information. In addition, to address retrieval-level forgetting, we introduce a training-free, self-refined adapter retrieval mechanism during inference, which leverages the model's inherent ability for better adapter retrieval. By jointly rectifying the model with these steps, MOS robustly resists catastrophic forgetting throughout the learning process. Extensive experiments on seven benchmark datasets validate MOS's state-of-the-art performance. Code is available at: https://github.com/sun-hailong/AAAI25-MOS
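
The following is a minimal sketch, not the authors' implementation, of the two ideas named in the abstract, assuming a frozen ViT-style backbone with one lightweight bottleneck adapter per task: weight-space adapter merging to limit parameter drift, and a training-free retrieval step that uses the model's own refined features to re-select the adapter. All names (\texttt{Adapter}, \texttt{merge\_adapters}, \texttt{retrieve\_adapter}) and the convex-combination merging rule are illustrative assumptions; see the released code at the URL above for the actual method.

\begin{verbatim}
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck adapter: down-project, ReLU, up-project."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adaptation of the frozen pre-trained features.
        return x + self.up(torch.relu(self.down(x)))


@torch.no_grad()
def merge_adapters(new: Adapter, old: Adapter, alpha: float = 0.5) -> None:
    """Blend the newly trained adapter with the previous one in weight space
    (simple convex combination, an assumed merging rule), so consecutive
    adapters stay close and parameter-level drift is reduced."""
    for p_new, p_old in zip(new.parameters(), old.parameters()):
        p_new.mul_(alpha).add_(p_old, alpha=1.0 - alpha)


@torch.no_grad()
def retrieve_adapter(features, adapters, prototypes):
    """Training-free, self-refined retrieval: propose an adapter from the raw
    backbone features, then re-score with the refined features it produces."""
    # Initial guess: similarity between raw features and each task's prototypes.
    scores = [torch.cosine_similarity(features, p.mean(0, keepdim=True)).mean()
              for p in prototypes]
    idx = int(torch.stack(scores).argmax())
    # Self-refinement: re-score using features adapted by the proposed adapter.
    refined = adapters[idx](features)
    scores = [torch.cosine_similarity(refined, p.mean(0, keepdim=True)).mean()
              for p in prototypes]
    return int(torch.stack(scores).argmax())
\end{verbatim}

In this sketch, \texttt{prototypes} would hold per-task class-mean features extracted once after training each task, so inference needs no further training, only the two forward passes shown.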