In continual learning (CL), catastrophic forgetting often arises from feature drift. This challenge is particularly prominent in the exemplar-free continual learning (EFCL) setting, where samples from previous tasks cannot be retained, making it difficult to preserve prior knowledge. To address this issue, some EFCL methods aim to identify feature spaces that minimize the impact on previous tasks while accommodating new ones. However, they rely on static features or outdated statistics stored from old tasks, which prevents them from capturing the dynamic evolution of the feature space in CL and leads to performance degradation over time. In this paper, we introduce the Drift-Resistant Space (DRS), which effectively handles feature drift without requiring explicit feature modeling or the storage of data from previous tasks. We propose a novel parameter-efficient fine-tuning approach, Low-Rank Adaptation Subtraction (LoRA-), to construct the DRS. This method subtracts the LoRA weights of old tasks from the initial pre-trained weights before processing new task data, thereby establishing the DRS for model training. As a result, LoRA- enhances stability, improves efficiency, and simplifies implementation. Furthermore, stabilizing feature drift allows for better plasticity through learning with a triplet loss. Our method consistently achieves state-of-the-art results, especially for long task sequences, across multiple datasets.
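The core operation described above, subtracting the accumulated LoRA updates of old tasks from the initial pre-trained weights before training on a new task, can be sketched as follows. This is a minimal illustrative sketch under standard LoRA notation (each old task i contributes an update B_i A_i); the function name, shapes, and scaling factor are hypothetical and not taken from the paper.

```python
import torch

def drs_weight(w_pretrained: torch.Tensor,
               old_lora_pairs: list[tuple[torch.Tensor, torch.Tensor]],
               scaling: float = 1.0) -> torch.Tensor:
    """Subtract old-task LoRA updates (B @ A) from the initial weight W0."""
    w = w_pretrained.clone()
    for B, A in old_lora_pairs:      # each (B_i, A_i) is one previous task's adapter
        w -= scaling * (B @ A)       # remove that task's low-rank update B_i A_i
    return w                         # weight placed in the drift-resistant space

# Toy usage: a 768x768 linear layer with two previous tasks' rank-8 adapters.
w0 = torch.randn(768, 768)
old = [(torch.randn(768, 8), torch.randn(8, 768)) for _ in range(2)]
w_drs = drs_weight(w0, old)          # the new task's LoRA is then trained on top of w_drs
```

The new task's adapter would then be learned on top of w_drs, so no features or statistics from old tasks need to be stored.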