Online Class Incremental Learning (OCIL) aims to train models incrementally, where data arrive in mini-batches and previous data are not accessible. A major challenge in OCIL is Catastrophic Forgetting, i.e., the loss of previously learned knowledge. Among existing baselines, replay-based methods achieve competitive results but require extra memory to store exemplars, while exemplar-free methods (i.e., those that store no data for replay) are resource-friendly but often lack accuracy. In this paper, we propose an exemplar-free approach, Forward-only Online Analytic Learning (F-OAL). Unlike traditional methods, F-OAL does not rely on back-propagation and is forward-only, significantly reducing memory usage and computation time. Cooperating with a pre-trained frozen encoder with Feature Fusion, F-OAL only needs to update a linear classifier by recursive least squares. This approach simultaneously achieves high accuracy and low resource consumption. Extensive experiments on benchmark datasets demonstrate F-OAL's robust performance in OCIL scenarios. Code is available at https://github.com/liuyuchen-cz/F-OAL.
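To make the abstract's central mechanism concrete, the following is a minimal sketch of a recursive least-squares (RLS) update for a linear classifier over frozen encoder features. All names, dimensions, and the ridge term `gamma` are illustrative assumptions, not the paper's actual settings; the features `X` here stand in for the output of the frozen pre-trained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, gamma = 8, 3, 1.0  # assumed feature dim, class count, ridge term

# RLS state: classifier weights W and inverse-correlation matrix R.
# Only these are updated online; no past data or gradients are stored.
W = np.zeros((d, c))
R = np.eye(d) / gamma

# Stand-ins for mini-batches of (frozen-encoder features, one-hot-like targets).
batches = [(rng.standard_normal((5, d)), rng.standard_normal((5, c)))
           for _ in range(4)]

for X, Y in batches:
    # Gain matrix for this batch (Woodbury identity keeps the inverse small:
    # it is taken over a batch_size x batch_size matrix, not d x d).
    K = R @ X.T @ np.linalg.inv(np.eye(len(X)) + X @ R @ X.T)
    R = R - K @ X @ R        # update inverse correlation
    W = W + K @ (Y - X @ W)  # correct weights with the prediction residual

# Sanity check: the streamed result matches the closed-form ridge solution
# computed on all data at once.
Xa = np.vstack([X for X, _ in batches])
Ya = np.vstack([Y for _, Y in batches])
W_direct = np.linalg.solve(Xa.T @ Xa + gamma * np.eye(d), Xa.T @ Ya)
print(np.allclose(W, W_direct))
```

The final check illustrates why such a forward-only update can avoid forgetting in principle: the recursively maintained `W` is algebraically equal to the regularized least-squares classifier fit jointly on every batch seen so far, even though each batch is processed once and discarded.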