In autonomous driving, even a meticulously trained model can fail when it encounters unfamiliar scenarios. Such scenarios can be formulated as an online continual learning (OCL) problem: data arrive in an online fashion, and the model is updated on these streaming data. Two major OCL challenges are catastrophic forgetting and data imbalance. To address these challenges, in this paper we propose an Analytic Exemplar-Free Online Continual Learning algorithm (AEF-OCL). AEF-OCL leverages analytic continual learning principles and employs ridge regression as a classifier for features extracted by a large backbone network. It solves the OCL problem by recursively computing the analytical solution, guaranteeing that the continual-learning solution is equivalent to its joint-learning counterpart, and it works without saving any previously seen samples (i.e., it is exemplar-free). Additionally, we introduce a Pseudo-Features Generator (PFG) module that recursively estimates the mean and variance of the real features of each class. It over-samples offset pseudo-features from the same normal distribution as the real features, thereby addressing the data imbalance issue. Experimental results demonstrate that, despite being an exemplar-free strategy, our method outperforms various competing methods on the autonomous driving dataset SODA10M. Source code is available at https://github.com/ZHUANGHP/Analytic-continual-learning.
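The recursive analytic solution described above can be sketched as a recursive least-squares (RLS) style ridge update: each batch updates the weights and a regularized inverse-autocorrelation matrix via the Woodbury identity, so no past samples need to be stored, and the streaming result matches the joint closed-form solution. This is a simplified illustration under standard RLS assumptions; all names are hypothetical, and it is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def joint_ridge(X, Y, gamma):
    """Closed-form ridge regression on all data at once (the joint-learning baseline)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ Y)

class RecursiveRidge:
    """Exemplar-free recursive ridge classifier (illustrative sketch).

    Maintains the weights W and R = (X^T X + gamma * I)^{-1} over all data
    seen so far, updating both per incoming batch without storing samples.
    """
    def __init__(self, d_feat, d_out, gamma=1.0):
        self.R = np.eye(d_feat) / gamma
        self.W = np.zeros((d_feat, d_out))

    def update(self, X, Y):
        # Woodbury identity: fold the new batch into the inverse R.
        K = np.linalg.solve(np.eye(len(X)) + X @ self.R @ X.T, X @ self.R)
        self.R = self.R - self.R @ X.T @ K
        # Innovation-based weight update: W += R X^T (Y - X W).
        self.W = self.W + self.R @ X.T @ (Y - X @ self.W)

# Sanity check: streaming updates reproduce the joint closed-form solution.
d, c, gamma = 8, 3, 1.0
model = RecursiveRidge(d, c, gamma)
batches = [(rng.normal(size=(16, d)), rng.normal(size=(16, c))) for _ in range(4)]
for X, Y in batches:
    model.update(X, Y)
X_all = np.vstack([b[0] for b in batches])
Y_all = np.vstack([b[1] for b in batches])
assert np.allclose(model.W, joint_ridge(X_all, Y_all, gamma), atol=1e-6)
```

The final assertion is the "equivalence to joint learning" property the abstract claims: four sequential batch updates yield the same weights as ridge regression fitted once on all 64 samples.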
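The PFG module's recursive mean and variance estimation can be sketched with Welford's online algorithm, followed by sampling pseudo-features from the fitted per-class Gaussian. This is a minimal sketch (class and method names are hypothetical, and it omits the offset mechanism), not the paper's actual module:

```python
import numpy as np

rng = np.random.default_rng(1)

class PseudoFeatureGenerator:
    """Illustrative per-class running Gaussian estimator.

    Tracks the mean and variance of real features with Welford's online
    algorithm, then over-samples pseudo-features from N(mean, var) for
    under-represented classes to counter data imbalance.
    """
    def __init__(self):
        self.n = {}      # samples seen per class
        self.mean = {}   # running mean per class
        self.m2 = {}     # running sum of squared deviations per class

    def update(self, feats, labels):
        for x, y in zip(feats, labels):
            if y not in self.n:
                self.n[y] = 0
                self.mean[y] = np.zeros_like(x)
                self.m2[y] = np.zeros_like(x)
            self.n[y] += 1
            delta = x - self.mean[y]
            self.mean[y] += delta / self.n[y]
            self.m2[y] += delta * (x - self.mean[y])

    def sample(self, label, k):
        # Draw k diagonal-Gaussian pseudo-features for the requested class.
        var = self.m2[label] / max(self.n[label] - 1, 1)
        d = len(self.mean[label])
        return rng.normal(self.mean[label], np.sqrt(var), size=(k, d))

# Usage: accumulate real features online, then over-sample a minority class.
feats = rng.normal(loc=2.0, scale=0.5, size=(500, 4))
pfg = PseudoFeatureGenerator()
pfg.update(feats, [0] * 500)
pseudo = pfg.sample(0, 1000)
assert pseudo.shape == (1000, 4)
```

Because the statistics are recursive, the generator never stores real features, which keeps the overall method exemplar-free while still rebalancing the classifier's training signal.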