Few-shot class-incremental learning (FSCIL) aims to incrementally recognize new classes from only a few samples while maintaining performance on previously learned classes. One effective approach to this challenge is to construct prototype-evolving classifiers. Despite the advances of existing methods, most of them simply initialize the classifier weights with mean features. We argue that this strategy is suboptimal because the representations of new classes are weak and biased. In this paper, we tackle this issue from two aspects. First, benefiting from the development of foundation models, we employ a foundation model, CLIP, as the network backbone to provide a general representation for each class. Second, to generate a more reliable and comprehensive instance representation, we propose a Knowledge Adapter (KA) module that summarizes data-specific knowledge from the training data and fuses it into the general representation. Additionally, to adapt the knowledge learned from the base classes to upcoming classes, we propose an Incremental Pseudo Episode Learning (IPEL) mechanism that simulates the actual FSCIL setting. Taken together, our proposed method, dubbed the Knowledge Adaptation Network (KANet), achieves competitive performance on a wide range of datasets, including CIFAR100, CUB200, and ImageNet-R.
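As context for the critique above, here is a minimal sketch of the mean-feature prototype initialization that the abstract argues is suboptimal, together with a naive fusion of a general embedding with data-specific features. All names, shapes, and the fusion rule are illustrative assumptions, not the paper's actual KA implementation.

```python
import numpy as np

def mean_feature_prototypes(features, labels, num_classes):
    """Baseline strategy: average each class's feature vectors to
    initialize classifier weights (the approach the abstract critiques)."""
    dim = features.shape[1]
    protos = np.zeros((num_classes, dim))
    for c in range(num_classes):
        protos[c] = features[labels == c].mean(axis=0)
    # L2-normalize so classification reduces to cosine similarity
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def fuse(general, specific, alpha=0.5):
    """Toy fusion of a general (e.g., CLIP-derived) class embedding with
    data-specific knowledge, in the spirit of the described adapter.
    A convex combination is an assumption for illustration only."""
    mixed = alpha * general + (1 - alpha) * specific
    return mixed / np.linalg.norm(mixed, axis=1, keepdims=True)

rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 8))     # 10 samples with 8-dim features
labels = np.repeat(np.arange(2), 5)  # 2 classes, 5 shots each
protos = mean_feature_prototypes(feats, labels, 2)
fused = fuse(protos, rng.normal(size=(2, 8)))
print(protos.shape, fused.shape)  # (2, 8) (2, 8)
```

With only a few shots per class, these mean-feature prototypes are noisy estimates of the true class centers, which is the bias the proposed fusion with a general representation is meant to mitigate.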