Class-Incremental Learning (CIL) requires a model to continually learn new classes without forgetting old ones. A common and efficient solution freezes a pre-trained model and trains lightweight adapters, whose parameters are often forced to be mutually orthogonal to prevent inter-task interference. However, we argue that such parameter-space constraints are detrimental to plasticity. To address this, we propose Semantic-Guided Dynamic Sparsification (SGDS), a novel method that proactively shapes the activation space by governing the orientation and rank of its subspaces through targeted sparsification. Specifically, SGDS promotes knowledge transfer by encouraging similar classes to share a compact activation subspace, while preventing interference by assigning non-overlapping activation subspaces to dissimilar classes. By sculpting class-specific sparse subspaces in the activation space, SGDS effectively mitigates interference without imposing rigid constraints on the parameter space. Extensive experiments on various benchmark datasets demonstrate that SGDS achieves state-of-the-art performance.
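To make the subspace-assignment idea concrete, the following is a minimal sketch, not the paper's implementation: it allocates a sparse mask over an adapter's hidden units per class, letting semantically similar classes reuse a shared compact subspace and giving dissimilar classes disjoint units. All names and values here (build_class_masks, hidden_dim, mask_size, sim_threshold, the random unit selection) are illustrative assumptions rather than details taken from SGDS.

```python
# Hypothetical sketch of semantic-guided sparse subspace assignment.
# Not the authors' code; names and the selection heuristic are assumptions.
import torch
import torch.nn.functional as F

def build_class_masks(prototypes, hidden_dim, mask_size, sim_threshold=0.7):
    """For each class, pick a sparse set of `mask_size` hidden units.
    Classes whose prototype cosine similarity exceeds `sim_threshold`
    reuse an earlier class's units (shared compact subspace); otherwise
    they receive units disjoint from all previously assigned ones."""
    n = prototypes.shape[0]
    sims = F.cosine_similarity(
        prototypes.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    masks = torch.zeros(n, hidden_dim, dtype=torch.bool)
    used = torch.zeros(hidden_dim, dtype=torch.bool)
    for c in range(n):
        prev = sims[c, :c]
        if c > 0 and prev.max() > sim_threshold:
            # Similar to an earlier class: share its activation subspace.
            masks[c] = masks[prev.argmax()]
            continue
        # Dissimilar to all earlier classes: carve out fresh,
        # non-overlapping hidden units.
        free = (~used).nonzero(as_tuple=True)[0]
        chosen = free[torch.randperm(len(free))[:mask_size]]
        masks[c, chosen] = True
        used[chosen] = True
    return masks

# Usage: mask the adapter's activations per class, so updates for
# dissimilar classes touch disjoint activation subspaces.
protos = F.normalize(torch.randn(10, 512), dim=-1)  # stand-in class embeddings
masks = build_class_masks(protos, hidden_dim=256, mask_size=16)
h = torch.randn(4, 256)                 # adapter activations for a batch
labels = torch.tensor([0, 3, 3, 7])
h_sparse = h * masks[labels]            # class-specific sparse activations
```

Because the constraint lives on which activation units a class may use, rather than on adapter weight directions, the weights themselves remain unconstrained, which is the plasticity argument the abstract makes against parameter-space orthogonality.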