While biological intelligence grows organically as new knowledge is gathered throughout life, Artificial Neural Networks forget catastrophically whenever they face a changing training data distribution. Rehearsal-based Continual Learning (CL) approaches have been established as a versatile and reliable solution to overcome this limitation; however, sudden input disruptions and memory constraints are known to alter the consistency of their predictions. We study this phenomenon by investigating the geometric characteristics of the learner's latent space and find that the latent representations of replayed data points from different classes become increasingly entangled, interfering with classification. Hence, we propose a geometric regularizer that enforces weak requirements on the Laplacian spectrum of the latent space, promoting a partitioning behavior. Our proposal, called Continual Spectral Regularizer for Incremental Learning (CaSpeR-IL), can be easily combined with any rehearsal-based CL approach and improves the performance of SOTA methods on standard benchmarks.
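To make the idea concrete, below is a minimal sketch of what a spectral regularizer of this kind could look like. The function name casper_style_penalty, the kNN-graph construction with a Gaussian affinity, and the exact loss form (shrinking the k smallest Laplacian eigenvalues while widening the gap to the (k+1)-th) are illustrative assumptions, not the paper's exact formulation. The link to partitioning comes from spectral graph theory: a graph's Laplacian has exactly as many zero eigenvalues as the graph has connected components, so driving the first k eigenvalues toward zero encourages the batch to split into k well-separated groups.

```python
import torch

def casper_style_penalty(z: torch.Tensor, k: int, n_neighbors: int = 5) -> torch.Tensor:
    """Hypothetical spectral penalty on a batch of latent features z [B, D].

    Builds a kNN similarity graph over the batch, forms the symmetric
    normalized Laplacian, and pushes its k smallest eigenvalues toward
    zero while widening the gap to the (k+1)-th. By spectral graph
    theory, k zero eigenvalues mean k connected components, i.e. the
    latent points partition into k groups.
    """
    # Pairwise squared distances and a Gaussian affinity matrix;
    # the median distance is a common bandwidth heuristic.
    d2 = torch.cdist(z, z).pow(2)
    w = torch.exp(-d2 / d2.median().clamp_min(1e-8))
    # Sparsify: keep each point's n_neighbors strongest links (the +1
    # accounts for the self-link on the diagonal), then symmetrize so
    # the graph is undirected and remove self-loops.
    idx = torch.topk(w, n_neighbors + 1, dim=1).indices
    mask = torch.zeros_like(w).scatter_(1, idx, 1.0)
    w = w * torch.maximum(mask, mask.t())
    w = w - torch.diag_embed(w.diagonal())
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = w.sum(dim=1).clamp_min(1e-8).rsqrt()
    lap = torch.eye(z.shape[0], device=z.device) \
        - d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]
    # Eigenvalues in ascending order; differentiable for symmetric
    # input (gradients can be unstable if eigenvalues coincide).
    evals = torch.linalg.eigvalsh(lap)
    # Small first-k eigenvalues plus a large (k+1)-th eigengap
    # reward a k-way partition of the batch.
    return evals[:k].sum() - evals[k]
```

In a rehearsal-based training step, one would presumably compute this penalty on the latent features of the replayed buffer batch (with k set to the number of classes it spans) and add it, scaled by a weighting coefficient, to the usual classification loss, leaving the base CL method otherwise untouched.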