Spiking Neural Networks (SNNs) are inherently suited to continual learning thanks to their event-driven temporal dynamics; however, their application to Class-Incremental Learning (CIL) has been hindered by catastrophic forgetting and by the temporal misalignment of spike patterns. In this work, we introduce Spiking Temporal Alignment with Experience Replay (STAER), a novel framework that explicitly preserves temporal structure to bridge the performance gap between SNNs and Artificial Neural Networks (ANNs). Our approach integrates a differentiable Soft-DTW alignment loss to maintain spike-timing fidelity and employs a temporal expansion-and-contraction mechanism on the output logits to enforce robust representation learning. Implemented on a deep spiking ResNet19 backbone, STAER achieves state-of-the-art performance on Sequential-MNIST and Sequential-CIFAR10. Empirical results demonstrate that our method matches or outperforms strong ANN baselines (ER, DER++) while preserving biologically plausible dynamics. Ablation studies further confirm that explicit temporal alignment is critical for representational stability, positioning STAER as a scalable solution for spike-native lifelong learning. Code is available at https://github.com/matteogianferrari/staer.
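To illustrate the kind of alignment objective the abstract refers to, the following is a minimal NumPy sketch of Soft-DTW (the differentiable relaxation of dynamic time warping) applied to two firing-rate sequences. The function names, the quadratic local cost, and the choice of `gamma` are illustrative assumptions, not STAER's actual implementation, which operates on spike trains within a deep spiking backbone.

```python
import numpy as np

def softmin(values, gamma):
    """Smooth minimum: -gamma * log(sum(exp(-v / gamma))), computed stably."""
    v = np.asarray(values, dtype=float) / -gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(x, y, gamma=0.1):
    """Soft-DTW discrepancy between two 1-D sequences (e.g., firing rates).

    Standard dynamic-programming recursion with the hard min replaced by
    a soft min, which makes the whole quantity differentiable in x and y.
    """
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2  # illustrative local cost
            R[i, j] = cost + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma
            )
    return R[n, m]

# A sequence aligned with itself scores lower than with a time-shifted copy.
x = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0])
z = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # x shifted by one step
print(soft_dtw(x, x), soft_dtw(x, z))
```

Because the soft min is differentiable, such a term can be added to a training loss so that gradients penalize drift in spike timing between stored and replayed representations.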