Deep models, e.g., CNNs and Vision Transformers, have achieved impressive performance on many vision tasks in the closed world. However, novel classes emerge from time to time in our ever-changing world, requiring a learning system to acquire new knowledge continually. Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally and build a universal classifier among all seen classes. However, when the model is directly trained with new class instances, a fatal problem occurs: the model tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades. There have been numerous efforts to tackle catastrophic forgetting in the machine learning community. In this paper, we comprehensively survey recent advances in class-incremental learning and summarize these methods from several aspects. We also provide a rigorous and unified evaluation of 17 methods on benchmark image classification tasks to empirically identify the characteristics of different algorithms. Furthermore, we observe that the current comparison protocol ignores the influence of the memory budget for model storage, which may lead to unfair comparisons and biased results. Hence, we advocate fair comparison by aligning the memory budget in evaluation, along with several memory-agnostic performance measures. The source code is available at https://github.com/zhoudw-zdw/CIL_Survey/