Rehearsal-based Continual Learning (CL) has been intensely investigated in Deep Neural Networks (DNNs). However, its application in Spiking Neural Networks (SNNs) has not been explored in depth. In this paper, we introduce the first memory-efficient implementation of Latent Replay (LR)-based CL for SNNs, designed to integrate seamlessly with resource-constrained devices. LRs combine new samples with latent representations of previously learned data to mitigate forgetting. Experiments on the Heidelberg SHD dataset with Sample-Incremental and Class-Incremental tasks reach Top-1 accuracies of 92.5% and 92%, respectively, without forgetting the previously learned information. Furthermore, we minimize the LRs' memory footprint by applying a time-domain compression, reducing their memory requirement by two orders of magnitude with respect to a naive rehearsal setup, with a maximum accuracy drop of 4%. On a Multi-Class-Incremental task, our SNN learns 10 new classes from an initial set of 10, reaching a Top-1 accuracy of 78.4% on the full test set.
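To make the rehearsal mechanism concrete, the following is a minimal PyTorch sketch of the latent-replay idea, together with a toy time-domain compression of stored spike trains. The input width (700 channels) and class count (20) match SHD, but the dense layers, the buffer contents, the names (`frontend`, `head`, `latent_buffer`, `compress_time`), and the compression window are illustrative assumptions, not the paper's SNN implementation.

```python
# Minimal latent-replay sketch (illustrative, not the paper's SNN).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Lower layers are frozen after pre-training; only the head keeps learning.
frontend = nn.Sequential(nn.Linear(700, 128), nn.ReLU())
head = nn.Linear(128, 20)
for p in frontend.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Latent replay buffer: activations of previously learned samples, captured
# at the latent layer (here: the frontend output), plus their labels.
# Random tensors stand in for real stored latents in this sketch.
latent_buffer = torch.randn(256, 128)
buffer_labels = torch.randint(0, 10, (256,))

def train_step(x_new, y_new, replay_size=32):
    """One rehearsal step: mix latents of new samples with replayed ones."""
    with torch.no_grad():
        z_new = frontend(x_new)  # latent representations of the new batch
    idx = torch.randint(0, latent_buffer.size(0), (replay_size,))
    z = torch.cat([z_new, latent_buffer[idx]])
    y = torch.cat([y_new, buffer_labels[idx]])
    optimizer.zero_grad()
    loss = loss_fn(head(z), y)  # old classes rehearsed, mitigating forgetting
    loss.backward()
    optimizer.step()
    return loss.item()

def compress_time(spikes, window):
    """Hypothetical time-domain compression: sum binary spikes over fixed
    windows, shrinking the stored time dimension by a factor of `window`."""
    t = spikes.size(0) - spikes.size(0) % window
    return spikes[:t].reshape(-1, window, spikes.size(1)).sum(1)

# Example step on a synthetic batch of "new class" samples (labels 10..19).
print(train_step(torch.randn(32, 700), torch.randint(10, 20, (32,))))
```

Freezing the frontend is what makes the buffer reusable: stored latents remain valid across tasks because the layers that produced them no longer change, and storing compressed latents instead of raw input sequences is what yields the memory savings reported above.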