In continual learning (CL), catastrophic forgetting often arises from feature drift. This challenge is particularly prominent in the exemplar-free continual learning (EFCL) setting, where samples from previous tasks cannot be retained, making it difficult to preserve prior knowledge. To address this issue, some EFCL methods aim to identify feature spaces that minimize the impact on previous tasks while accommodating new ones. However, they rely on static features or outdated statistics stored from old tasks, which prevents them from capturing the dynamic evolution of the feature space in CL, leading to performance degradation over time. In this paper, we introduce the Drift-Resistant Space (DRS), which effectively handles feature drift without requiring explicit feature modeling or the storage of data from previous tasks. We propose a novel parameter-efficient fine-tuning approach, Low-Rank Adaptation Subtraction (LoRA-), to construct the DRS. Before processing new task data, this method subtracts the LoRA weights of old tasks from the initial pre-trained weights to establish the DRS for model training. As a result, LoRA- enhances stability, improves efficiency, and simplifies implementation. Furthermore, stabilizing feature drift allows for better plasticity by learning with a triplet loss. Our method consistently achieves state-of-the-art results, especially for long task sequences, across multiple datasets.
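The core weight manipulation behind LoRA- can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it assumes each finished task's LoRA update is a low-rank product B_t A_t merged into the backbone, and all names (`drift_resistant_weight`, the dimensions, the toy factors) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # illustrative feature dimension and LoRA rank

# Frozen pre-trained weight W0 and the LoRA factor pairs (B_t, A_t)
# accumulated from two previously learned tasks (toy random values).
W0 = rng.standard_normal((d, d))
old_loras = [(rng.standard_normal((d, r)) * 0.01,   # B_t: (d, r)
              rng.standard_normal((r, d)) * 0.01)   # A_t: (r, d)
             for _ in range(2)]

def drift_resistant_weight(W0, old_loras):
    """Subtract every old task's low-rank update B_t @ A_t from the
    pre-trained weight, yielding the weight that defines the
    drift-resistant space for training on the new task."""
    W = W0.copy()
    for B, A in old_loras:
        W -= B @ A
    return W

W_drs = drift_resistant_weight(W0, old_loras)

# The new task then trains its own LoRA pair on top of W_drs,
# so its forward pass uses W_drs + B_new @ A_new.
B_new = np.zeros((d, r))                      # standard LoRA init: B = 0
A_new = rng.standard_normal((r, d)) * 0.01
x = rng.standard_normal((4, d))               # a toy batch of 4 inputs
out = x @ (W_drs + B_new @ A_new).T
```

Because `B_new` starts at zero, the new task's forward pass initially depends only on `W_drs`; training then adapts `B_new` and `A_new` within that subtracted space.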