Non-exemplar class-incremental learning (NECIL) aims to resist catastrophic forgetting without storing samples of old classes. Prior methods generally employ simple rules to generate features for replay, and thus suffer from a large distribution gap between the replayed features and the real ones. To address this issue, we propose a simple yet effective \textbf{Diff}usion-based \textbf{F}eature \textbf{R}eplay (\textbf{DiffFR}) method for NECIL. First, to alleviate the limited representational capacity caused by freezing the feature extractor, we employ Siamese-based self-supervised learning to obtain initial generalizable features. Second, we devise diffusion models to generate class-representative features that are highly similar to real ones, providing an effective way of memorizing knowledge without exemplars. Third, we introduce prototype calibration to direct the diffusion model toward learning the distribution shapes of features rather than the entire distribution. Extensive experiments on public datasets demonstrate significant performance gains of our DiffFR, which outperforms state-of-the-art NECIL methods by 3.0\% on average. The code will be made publicly available soon.
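As a toy NumPy sketch of the prototype-calibration idea (not the paper's implementation): features of each class are centered on a stored class prototype so the generative model only has to capture the distribution shape, and generated samples are shifted back by the prototype at replay time. The Gaussian resampling stand-in below is an assumption for illustration; in DiffFR the generator is a diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" features of one class: a Gaussian centered on the prototype.
prototype = np.array([5.0, -3.0])
features = prototype + rng.normal(0.0, 1.0, size=(1000, 2))

# Prototype calibration: model only the centered residuals (the
# distribution shape), not the absolute location of the class.
residuals = features - prototype

# Stand-in generator over residuals (hypothetical placeholder for
# the diffusion model): resample from the residual statistics.
gen_residuals = rng.normal(residuals.mean(0), residuals.std(0),
                           size=(1000, 2))

# Shift generated shapes back by the stored prototype to replay features.
replayed = prototype + gen_residuals

# Replayed features should be centered near the class prototype.
print(np.allclose(replayed.mean(0), prototype, atol=0.2))
```

Storing only a low-dimensional prototype per class keeps the method exemplar-free while still anchoring generated features at the right location in feature space.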