Few-Shot Class-Incremental Learning (FSCIL) introduces a paradigm in which the problem space expands with limited data. FSCIL methods inherently face the challenge of catastrophic forgetting as data arrives incrementally, making models susceptible to overwriting previously acquired knowledge. Moreover, given the scarcity of labeled samples available at any given time, models may be prone to overfitting and find it challenging to strike a balance between extensive pretraining and the limited incremental data. To address these challenges, we propose the OrCo framework, built on two core principles: orthogonality of features in the representation space, and contrastive learning. In particular, we improve the generalization of the embedding space by employing a combination of supervised and self-supervised contrastive losses during the pretraining phase. Additionally, we introduce the OrCo loss to address challenges arising from data limitations during incremental sessions. Through feature space perturbations and orthogonality between classes, the OrCo loss maximizes margins and reserves space for subsequent incremental data. This, in turn, ensures that incoming classes are accommodated in the feature space without compromising previously acquired knowledge. Our experimental results showcase state-of-the-art performance across three benchmark datasets: mini-ImageNet, CIFAR100, and CUB. Code is available at https://github.com/noorahmedds/OrCo
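To make the orthogonality principle concrete, the following is a minimal sketch of one idea the abstract names: reserving mutually orthogonal directions in the feature space, one per class, so that later classes can occupy unused directions without disturbing earlier ones. This is an illustrative assumption using a QR decomposition to generate orthonormal class anchors, not the OrCo implementation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, n_classes = 64, 10

# QR decomposition of a random Gaussian matrix yields orthonormal columns;
# each column serves as a fixed, mutually orthogonal target direction
# ("anchor") for one class. Unassigned directions remain reserved for
# classes arriving in future incremental sessions.
q, _ = np.linalg.qr(rng.standard_normal((feat_dim, n_classes)))
anchors = q.T  # shape (n_classes, feat_dim), pairwise orthogonal unit vectors

# Pairwise cosine similarities form the identity matrix (up to numerical
# error): each anchor has unit norm and zero similarity to every other class.
sims = anchors @ anchors.T
print(np.allclose(sims, np.eye(n_classes), atol=1e-8))  # True
```

Because the anchors are pairwise orthogonal, pulling each class's features toward its own anchor (e.g., with a contrastive objective) maximizes angular margins between classes by construction.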