Class-incremental learning (CIL) aims to enable models to continuously learn new classes while overcoming catastrophic forgetting. The introduction of pre-trained models has brought new tuning paradigms to CIL. In this paper, we revisit different parameter-efficient tuning (PET) methods within the context of continual learning. We observe that adapter tuning outperforms prompt-based methods, even without expanding parameters in each learning session. Motivated by this, we propose incrementally tuning a shared adapter without imposing parameter update constraints, which enhances the learning capacity of the backbone. Additionally, we sample features from stored prototypes to retrain a unified classifier, further improving classification performance. We estimate the semantic shift of old prototypes without access to past samples and update the stored prototypes session by session. Our method requires no model expansion and retains no image samples. It surpasses previous pre-trained-model-based CIL methods and demonstrates remarkable continual learning capability. Experimental results on five CIL benchmarks validate the effectiveness of our approach, which achieves state-of-the-art (SOTA) performance.
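To make the prototype-based classifier retraining concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it assumes each old class is summarized by a stored feature mean and a diagonal variance, and draws Gaussian pseudo-features around each prototype to retrain a unified linear head. Names such as `proto_mean` and `proto_var` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def retrain_classifier(proto_mean, proto_var, feat_dim, num_classes,
                       samples_per_class=256, epochs=5, lr=0.01):
    """Retrain a unified linear classifier from per-class feature statistics.

    proto_mean, proto_var: (num_classes, feat_dim) stored class prototypes
    (means) and diagonal variances -- an assumed storage format.
    """
    clf = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        # Draw pseudo-features from a Gaussian centered at each stored prototype.
        eps = torch.randn(num_classes, samples_per_class, feat_dim)
        feats = proto_mean[:, None, :] + eps * proto_var.sqrt()[:, None, :]
        labels = torch.arange(num_classes).repeat_interleave(samples_per_class)
        logits = clf(feats.reshape(-1, feat_dim))
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return clf
```

Because only class statistics are replayed, this step touches no stored images, consistent with the exemplar-free setting described above.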
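The session-wise update of old prototypes can likewise be sketched. The weighting scheme below (a Gaussian kernel over distances between current-session features and each old prototype, in the spirit of semantic-drift compensation) is an assumption for illustration, not necessarily the exact rule used in the paper.

```python
import torch

@torch.no_grad()
def update_prototypes(old_protos, feats_old, feats_new, sigma=1.0):
    """Shift each stored prototype by a weighted average of the displacement
    that current-session samples undergo between the backbone before and
    after adapter tuning; no past samples are required.

    old_protos: (C, D) prototypes of previously seen classes
    feats_old:  (N, D) current-session features from the old backbone
    feats_new:  (N, D) current-session features from the tuned backbone
    """
    delta = feats_new - feats_old                      # per-sample drift, (N, D)
    dist2 = torch.cdist(old_protos, feats_old).pow(2)  # (C, N) squared distances
    w = torch.exp(-dist2 / (2 * sigma ** 2))           # closeness weights, (C, N)
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)
    return old_protos + w @ delta                      # compensated prototypes, (C, D)
```

In this sketch, samples that lie close to an old prototype in the old feature space contribute most to estimating how that prototype has shifted after the current session's tuning.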