Few-shot class-incremental learning (FSCIL) has attracted significant attention for its ability to continually perform classification with only a few training samples, yet it suffers from the key problem of catastrophic forgetting. Existing methods usually employ an external memory to store previous knowledge and treat it on a par with the incremental classes, which fails to properly preserve essential previous knowledge. To address this problem, and inspired by recent distillation work on knowledge transfer, we propose a framework termed \textbf{C}onstrained \textbf{D}ataset \textbf{D}istillation (\textbf{CD$^2$}) to facilitate FSCIL, which consists of a dataset distillation module (\textbf{DDM}) and a distillation constraint module~(\textbf{DCM}). Specifically, the DDM synthesizes highly condensed samples under the guidance of the classifier, forcing the model to extract compact, essential class-related cues from the few incremental samples. The DCM introduces a dedicated loss that constrains the previously learned class distributions, so that the distilled knowledge is preserved more thoroughly. Extensive experiments on three public datasets demonstrate the superiority of our method over state-of-the-art competitors.
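To make the classifier-guided synthesis performed by the DDM more concrete, the snippet below gives a minimal, hypothetical PyTorch-style sketch, not the authors' actual implementation. It assumes a frozen model exposing \texttt{backbone} and \texttt{classifier} components (both assumptions), and optimizes a handful of synthetic samples so that the classifier assigns them to the target class while their features stay close to the statistics of the real few-shot samples; the true CD$^2$ objective may differ.

\begin{verbatim}
import torch
import torch.nn.functional as F

def distill_class_samples(model, real_images, labels,
                          n_synthetic=1, steps=200, lr=0.1):
    """Hypothetical sketch of classifier-guided dataset distillation:
    optimize a few synthetic samples per class so the frozen classifier
    labels them confidently, while matching real feature statistics."""
    model.eval()
    # Initialize synthetic samples from the mean of the real few-shot images.
    syn = real_images.mean(dim=0, keepdim=True).repeat(n_synthetic, 1, 1, 1).clone()
    syn.requires_grad_(True)
    syn_labels = labels[:1].repeat(n_synthetic)
    opt = torch.optim.SGD([syn], lr=lr)

    with torch.no_grad():
        # Assumed interface: model.backbone returns feature vectors.
        real_feat = model.backbone(real_images).mean(dim=0)

    for _ in range(steps):
        opt.zero_grad()
        feat = model.backbone(syn)
        logits = model.classifier(feat)              # assumed classifier head
        ce = F.cross_entropy(logits, syn_labels)     # classifier-guided objective
        align = F.mse_loss(feat.mean(dim=0), real_feat)  # stay near real statistics
        (ce + align).backward()
        opt.step()
    return syn.detach()
\end{verbatim}

The returned condensed samples could then be stored in place of raw exemplars; a constraint loss in the spirit of the DCM would additionally penalize drift of the previously learned class distributions during incremental sessions.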