Drawing inspiration from human learning behaviors, this work proposes a novel approach to mitigating catastrophic forgetting in Prompt-based Continual Learning models by exploiting the relationships between continuously emerging class data. We find that applying human habits of organizing and connecting information can serve as an efficient strategy when training deep learning models. Specifically, by building a hierarchical tree structure based on the expanding set of labels, we gain fresh insights into the data, identifying groups of similar classes that could easily cause confusion. Additionally, we uncover deeper hidden connections between classes by probing the original pretrained model's behavior through an optimal transport-based approach. From these insights, we propose a novel regularization loss function that encourages models to focus more on challenging knowledge areas, thereby enhancing overall performance. Experimentally, our method demonstrates significant superiority over the strongest state-of-the-art models on various benchmarks.
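To make the idea of a label hierarchy concrete, the following is a minimal sketch, not the paper's implementation: it assumes each class is summarized by a prototype feature vector (e.g., the mean of pretrained-model features), hierarchically clusters those prototypes, and cuts the resulting tree to reveal groups of easily confused classes. The prototypes, the cosine metric, and the distance threshold `t=0.5` are all illustrative assumptions; the optimal transport component of the method is omitted here.

```python
# Hedged sketch: find "confusable" class groups by hierarchically
# clustering hypothetical per-class prototype features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical class prototypes (mean pretrained features per class).
num_classes, feat_dim = 10, 32
prototypes = rng.normal(size=(num_classes, feat_dim))
# Make classes 0-2 nearly identical so they form a confusable group.
prototypes[1] = prototypes[0] + 0.01 * rng.normal(size=feat_dim)
prototypes[2] = prototypes[0] + 0.01 * rng.normal(size=feat_dim)

# Build the label tree (average linkage over cosine distance) and cut it.
tree = linkage(prototypes, method="average", metric="cosine")
groups = fcluster(tree, t=0.5, criterion="distance")

# Classes sharing a group id are candidates for a larger weight in a
# regularization loss that focuses training on hard-to-separate classes.
print(groups)
```

In a continual learning setting, this clustering would be recomputed as new classes arrive, and the group structure would then feed the regularization term described above.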