The primary objective of methods in continual learning is to learn tasks sequentially over time from a stream of data, while mitigating the detrimental phenomenon of catastrophic forgetting. In this paper, we focus on learning an optimal representation between previous class prototypes and newly encountered ones. We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL), tailored specifically for class-incremental learning scenarios. To this end, we introduce a contrastive loss that incorporates new classes into the latent representation by reducing the intra-class distance and increasing the inter-class distance. Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique. Empirical evaluations conducted on the CIFAR-10 and CIFAR-100 datasets for image classification, and on a GNSS-based image dataset for interference classification, validate the efficacy of our method, showcasing its superiority over existing state-of-the-art approaches.
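The core idea of the combined objective can be sketched in a few lines: a prototype-based cross-entropy term plus a pairwise contrastive term that shrinks intra-class distances and pushes inter-class distances beyond a margin, mixed by a balance weight. The following is a minimal NumPy illustration, not the paper's implementation: the margin-based contrastive form and the fixed weight `lam` (a stand-in for the Bayesian-learned balance) are assumptions for the sketch.

```python
import numpy as np

def cross_entropy_to_prototypes(z, prototypes, labels):
    """Softmax cross-entropy with logits = negative squared distance to each prototype."""
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (n, n_classes)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)                   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def contrastive_loss(z, labels, margin=1.0):
    """Pairwise contrastive term: pull same-class pairs together,
    push different-class pairs at least `margin` apart."""
    total, count = 0.0, 0
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            d = np.linalg.norm(z[i] - z[j])
            if labels[i] == labels[j]:
                total += d ** 2                      # intra-class: reduce distance
            else:
                total += max(0.0, margin - d) ** 2   # inter-class: increase distance
            count += 1
    return total / count

def combined_loss(z, prototypes, labels, lam=0.5):
    # `lam` is a fixed placeholder here; in the paper the balance between the
    # two terms is adapted dynamically via Bayesian learning.
    return (1.0 - lam) * cross_entropy_to_prototypes(z, prototypes, labels) \
           + lam * contrastive_loss(z, labels)
```

As a sanity check, embeddings that cluster tightly around their class prototypes and stay far from other classes should yield a lower combined loss than embeddings collapsed between prototypes.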