Edge computing scenarios necessitate hardware-efficient online continual learning algorithms that can adapt to dynamic environments. However, existing algorithms often suffer from high memory overhead and a bias towards recently trained tasks. To tackle these issues, this paper proposes a novel online continual learning approach, termed SESLR, which incorporates a sleep-enhanced latent replay scheme with spiking neural networks (SNNs). SESLR leverages SNNs' binary spike characteristics to store replay features in single bits, significantly reducing memory overhead. Furthermore, inspired by biological sleep-wake cycles, SESLR introduces a noise-enhanced sleep phase in which the model trains exclusively on replay samples with controlled noise injection, effectively mitigating classification bias towards new classes. Extensive experiments on both conventional (MNIST, CIFAR10) and neuromorphic (NMNIST, CIFAR10-DVS) datasets demonstrate SESLR's effectiveness. On Split CIFAR10, SESLR achieves nearly 30% improvement in average accuracy with only one-third of the memory consumption of baseline methods. On Split CIFAR10-DVS, it improves accuracy by approximately 10% while reducing memory overhead by a factor of 32. These results validate SESLR as a promising solution for online continual learning in resource-constrained edge computing scenarios.
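The single-bit storage idea can be illustrated with a minimal sketch. Because SNN latent features are binary spikes (0 or 1), each value in the replay buffer can be packed into one bit instead of a 32-bit float, which accounts for the 32× memory reduction cited above. The buffer shape and variable names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical latent replay buffer: binary spike features (0/1) as an SNN
# layer might emit them. 100 samples with 512-dim features, chosen arbitrarily.
rng = np.random.default_rng(0)
features = (rng.random((100, 512)) < 0.5).astype(np.uint8)

# Pack each 0/1 feature into a single bit (8 features per byte).
packed = np.packbits(features, axis=1)  # shape (100, 64)

# Unpack losslessly when a stored sample is replayed during training.
restored = np.unpackbits(packed, axis=1)[:, :512]
assert np.array_equal(features, restored)

# Memory vs. storing the same features as float32 activations.
float32_bytes = features.size * 4  # 4 bytes per float32 value
packed_bytes = packed.size         # 1 bit per value
print(float32_bytes // packed_bytes)  # -> 32
```

The round trip through `np.packbits`/`np.unpackbits` is lossless precisely because spikes are binary; the same trick would not apply to real-valued activations, which is why the binary nature of SNN features is central to the memory claim.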