Continual learning (CL) incrementally learns a sequence of tasks. This paper studies class-incremental learning (CIL), a challenging CL setting with two key challenges: catastrophic forgetting (CF) and inter-task class separation (ICS). Despite numerous proposed methods, these issues remain persistent obstacles. This paper proposes a novel CIL method, called Kernel Linear Discriminant Analysis (KLDA), that effectively avoids the CF and ICS problems. KLDA leverages only the powerful features learned by a foundation model (FM). However, using these features directly proves suboptimal, so KLDA applies the Radial Basis Function (RBF) kernel, approximated with Random Fourier Features (RFF), to enhance the FM's feature representations and improve performance. When a new task arrives, KLDA computes only the mean of each class in the task and updates a shared covariance matrix for all learned classes based on the kernelized features; classification is then performed with Linear Discriminant Analysis. Our empirical evaluation on text and image classification datasets demonstrates that KLDA significantly outperforms baselines. Remarkably, without relying on replay data, KLDA achieves accuracy comparable to joint training on all classes, which is considered the upper bound for CIL performance. The KLDA code is available at https://github.com/salehmomeni/klda.
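The pipeline the abstract describes can be sketched in a few lines of NumPy: map frozen FM features through an RFF approximation of the RBF kernel, keep one mean per class plus a single shared covariance, and classify with LDA. This is an illustrative reimplementation under assumptions, not the authors' released code; the class name, hyperparameters, and the ridge regularizer are all choices made here for the sketch.

```python
import numpy as np

class KLDASketch:
    """Illustrative sketch of the KLDA recipe from the abstract:
    frozen foundation-model features -> RBF kernel via Random Fourier
    Features (RFF) -> per-class means + one shared covariance -> LDA.
    Names and hyperparameters are assumptions, not the authors' code."""

    def __init__(self, in_dim, rff_dim=1000, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # RFF for the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
        # phi(x) = sqrt(2/D) cos(W x + b), W ~ N(0, sigma^-2), b ~ U(0, 2pi)
        self.W = rng.normal(0.0, 1.0 / sigma, size=(in_dim, rff_dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=rff_dim)
        self.D = rff_dim
        self.means = {}                           # one mean vector per class
        self.cov = np.zeros((rff_dim, rff_dim))   # shared covariance (scatter)
        self.n = 0

    def phi(self, X):
        # Kernelized feature map applied to (frozen) FM features
        return np.sqrt(2.0 / self.D) * np.cos(X @ self.W + self.b)

    def learn_task(self, X, y):
        # Per task: only class means and an incremental shared-covariance update
        Z = self.phi(np.asarray(X, dtype=float))
        for c in np.unique(y):
            self.means[int(c)] = Z[y == c].mean(axis=0)
        for z, c in zip(Z, y):
            d = z - self.means[int(c)]
            self.cov += np.outer(d, d)
            self.n += 1

    def predict(self, X):
        Z = self.phi(np.asarray(X, dtype=float))
        cov = self.cov / max(self.n, 1) + 1e-4 * np.eye(self.D)  # ridge for stability
        inv = np.linalg.inv(cov)
        classes = sorted(self.means)
        M = np.stack([self.means[c] for c in classes])   # (C, D) class means
        w = M @ inv                                      # (C, D) LDA weights
        scores = Z @ w.T - 0.5 * np.sum(w * M, axis=1)   # linear discriminants
        return np.array(classes)[np.argmax(scores, axis=1)]
```

Because each task contributes only class means and a covariance update, no parameter of the feature extractor is ever modified, which is why the method sidesteps CF by construction; the single shared covariance lets the LDA decision rule compare classes across tasks, addressing ICS.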