Continual learning (CL) empowers AI systems to progressively acquire knowledge from non-stationary data streams, but catastrophic forgetting remains a critical challenge. In this work, we identify attention drift in Vision Transformers as a primary source of catastrophic forgetting: the attention paid to previously learned visual concepts shifts significantly after new tasks are learned. Inspired by neuroscientific insights into selective attention in the human visual system, we propose a novel attention-retaining framework that mitigates forgetting in CL. Our method constrains attention drift by explicitly modifying gradients during backpropagation in a two-step process: 1) extracting the previous task's attention maps with a layer-wise rollout mechanism and generating instance-adaptive binary masks, and 2) while learning a new task, applying these masks to zero out gradients associated with previously attended regions, thereby preventing disruption of learned visual concepts. For compatibility with modern optimizers, the gradient masking is further augmented by scaling parameter updates proportionally to preserve their relative magnitudes. Experiments and visualizations demonstrate that our method mitigates catastrophic forgetting and preserves visual concepts, achieving state-of-the-art performance and robust generalizability across diverse CL scenarios.
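The two-step process above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: `attention_rollout` follows the standard residual-aware layer-wise rollout, the quantile threshold `q` used to binarize the mask is a hypothetical choice standing in for the paper's instance-adaptive masking, and the global L2 rescaling is one plausible reading of "scaling parameter updates proportionally to maintain their relative magnitudes".

```python
import numpy as np

def attention_rollout(attn_layers):
    """Layer-wise attention rollout: average each head-averaged attention
    matrix with the identity (to account for residual connections),
    row-normalize, and multiply across layers."""
    n = attn_layers[0].shape[0]
    rollout = np.eye(n)
    for attn in attn_layers:
        a = 0.5 * (attn + np.eye(n))        # fold in the skip connection
        a = a / a.sum(axis=-1, keepdims=True)
        rollout = a @ rollout
    return rollout

def binary_mask_from_rollout(rollout, q=0.5):
    """Instance-adaptive binary mask over patch tokens: keep patches whose
    rollout attention from the [CLS] token (row 0) is at or above the
    per-instance quantile q (q is an assumed hyperparameter)."""
    cls_to_patches = rollout[0, 1:]
    thresh = np.quantile(cls_to_patches, q)
    return (cls_to_patches >= thresh).astype(np.float64)

def mask_and_rescale(grad, zero_mask):
    """Zero out gradient entries flagged by zero_mask (regions attended to
    by the previous task), then rescale the surviving entries so the
    update's L2 norm matches the original gradient's, keeping the overall
    step size comparable for adaptive optimizers."""
    kept = grad * (1.0 - zero_mask)
    norm_kept = np.linalg.norm(kept)
    if norm_kept == 0.0:
        return kept
    return kept * (np.linalg.norm(grad) / norm_kept)
```

In a training loop, the mask would be computed once from the previous task's model (step 1) and then applied to the per-parameter gradients of the new task at every update (step 2), before the optimizer step.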