Neuromorphic vision systems based on spiking neural networks (SNNs) offer ultra-low-power perception for event-based and frame-based cameras, yet catastrophic forgetting remains a critical barrier to deployment in continually evolving environments. Existing continual learning methods, developed primarily for artificial neural networks, seldom jointly optimize accuracy and energy efficiency, with particularly limited exploration on event-based datasets. We propose an energy-aware spike budgeting framework for continual SNN learning that integrates experience replay, learnable leaky integrate-and-fire neuron parameters, and an adaptive spike scheduler to enforce dataset-specific energy constraints during training. Our approach exhibits modality-dependent behavior: on frame-based datasets (MNIST, CIFAR-10), spike budgeting acts as a sparsity-inducing regularizer, improving accuracy while reducing spike rates by up to 47\%; on event-based datasets (DVS-Gesture, N-MNIST, CIFAR-10-DVS), controlled budget relaxation enables accuracy gains up to 17.45 percentage points with minimal computational overhead. Across five benchmarks spanning both modalities, our method demonstrates consistent performance improvements while minimizing dynamic power consumption, advancing the practical viability of continual learning in neuromorphic vision systems.
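To make the core mechanism concrete, the interplay of leaky integrate-and-fire dynamics and a spike budget can be sketched as follows. This is a minimal illustration, not the paper's implementation: the leak factor `beta`, the hard-reset rule, and the hinge-style `spike_budget_penalty` are simplifying assumptions standing in for the learnable LIF parameters and adaptive spike scheduler described in the abstract.

```python
# Hypothetical sketch: a single LIF neuron with a leak factor `beta`
# (learnable in the actual framework) and a soft spike-budget penalty
# that could serve as the sparsity-inducing regularizer described
# for frame-based datasets.

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.
    Returns the binary spike train."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = beta * v + i          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

def spike_budget_penalty(spikes, budget):
    """Hinge penalty on the spike rate: zero while the neuron stays
    within its budget, linear in the excess otherwise."""
    rate = sum(spikes) / len(spikes)
    return max(0.0, rate - budget)

# Constant drive of 0.5 for 10 steps: the neuron fires every third step.
train = lif_forward([0.5] * 10)
penalty = spike_budget_penalty(train, budget=0.2)
```

Under this toy setting a constant input of 0.5 produces a spike rate of 0.3, so a budget of 0.2 yields a positive penalty; adding such a term to the training loss pressures the network toward sparser firing, while raising the budget (as in the event-based experiments) relaxes that pressure.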