The primary objective of continual learning methods is to learn tasks sequentially over time (sometimes from a stream of data) while mitigating the detrimental phenomenon of catastrophic forgetting. This paper proposes a method for learning an effective representation between previously seen and newly encountered class prototypes. We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL), tailored specifically for class-incremental learning scenarios. We introduce a contrastive loss that incorporates novel classes into the latent representation by reducing intra-class distances and increasing inter-class distances. Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions via a Bayesian learning technique. Experimental results on the CIFAR-10, CIFAR-100, and ImageNet100 datasets for image classification, and on a GNSS-based image dataset for interference classification, validate the efficacy of our method and showcase its superiority over existing state-of-the-art approaches. Git: https://gitlab.cc-asp.fraunhofer.de/darcy_gnss/gnss_class_incremental_learning
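To make the two ingredients of the abstract concrete, the sketch below shows (a) a prototype-based contrastive term that pulls an embedding toward its class prototype while pushing it away from other prototypes, and (b) an uncertainty-weighted combination of the cross-entropy and contrastive losses. The weighting here uses homoscedastic-uncertainty (learned log-variance) balancing as a hypothetical stand-in for the paper's Bayesian learning-driven scheme; function names, the margin parameter, and the exact loss forms are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()                      # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def prototype_contrastive(embedding, prototypes, label, margin=1.0):
    """Illustrative contrastive term over class prototypes:
    minimize the distance to the correct prototype (intra-class)
    and enforce at least `margin` to the others (inter-class)."""
    d = np.linalg.norm(prototypes - embedding, axis=1)
    pull = d[label] ** 2
    push = sum(max(0.0, margin - d[k]) ** 2
               for k in range(len(prototypes)) if k != label)
    return pull + push

def blended_loss(l_ce, l_con, log_var_ce, log_var_con):
    """Uncertainty-weighted sum of the two losses; in training the
    log-variances would be learned jointly with the network weights,
    shifting emphasis between the objectives over time."""
    return (np.exp(-log_var_ce) * l_ce + log_var_ce
            + np.exp(-log_var_con) * l_con + log_var_con)

# Toy example: a 2-D embedding sitting on its class-0 prototype.
protos = np.array([[0.0, 0.0], [3.0, 0.0]])
emb = np.array([0.0, 0.0])
l_con = prototype_contrastive(emb, protos, label=0)
l_ce = cross_entropy(np.array([2.0, -1.0]), label=0)
total = blended_loss(l_ce, l_con, log_var_ce=0.0, log_var_con=0.0)
```

With both log-variances at zero the combination reduces to a plain sum; as one log-variance grows, that loss's weight decays as `exp(-log_var)` while the additive log-variance term discourages ignoring it entirely.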