The bio-inspired integrate-fire-reset mechanism of spiking neurons constitutes the foundation for efficient processing in Spiking Neural Networks (SNNs). Recent progress in large models demands that spiking neurons support highly parallel computation to scale efficiently on modern GPUs. This work proposes a novel functional perspective that provides general guidance for designing parallel spiking neurons. We argue that the reset mechanism, which induces complex temporal dependencies and hinders parallel training, should be removed. However, any such modification should satisfy two principles: 1) preserving the functions of reset as a core biological mechanism; and 2) enabling parallel training without sacrificing the serial inference ability of spiking neurons, which underpins their efficiency at test time. To this end, we identify the functions of the reset and analyze how to reconcile parallel training with serial inference, upon which we propose a dynamic decay spiking neuron. We conduct comprehensive testing of our method in terms of: 1) Training efficiency and extrapolation capability. On 16k-length sequences, we achieve a 25.6x training speedup over the pioneering parallel spiking neuron, and our models trained on 2k-length sequences can stably perform inference on sequences as long as 30k. 2) Generality. We demonstrate the consistent effectiveness of the proposed method across five task categories (image classification, neuromorphic event processing, time-series forecasting, language modeling, and reinforcement learning), three network architectures (spiking CNN/Transformer/SSMs), and two spike activation modes (spike/integer activation). 3) Energy consumption. The spike firing rate of our neuron is lower than that of vanilla and existing parallel spiking neurons.
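To make the parallelism argument concrete, the following is a minimal sketch (not the paper's exact dynamic decay neuron, whose details are not given here) of why removing the reset enables parallel training while preserving serial inference: a hard reset makes the membrane potential depend nonlinearly on past spikes, forcing step-by-step computation, whereas a reset-free leaky integrator obeys the linear recurrence h[t] = lam * h[t-1] + x[t], which unrolls to a weighted sum that can be evaluated for all timesteps at once.

```python
import numpy as np

def serial_no_reset(x, lam):
    """Step-by-step leaky integration (the efficient serial inference mode)."""
    h, out = 0.0, []
    for xt in x:
        h = lam * h + xt  # linear recurrence; no reset after firing
        out.append(h)
    return np.array(out)

def parallel_no_reset(x, lam):
    """Same dynamics, evaluated in parallel over the whole sequence.

    Since h[t] = sum_{k<=t} lam**(t-k) * x[k], the full trajectory is a
    single matrix-vector product with a lower-triangular weight matrix.
    """
    t = np.arange(len(x))
    W = np.tril(lam ** (t[:, None] - t[None, :]))  # W[t, k] = lam**(t-k), k <= t
    return W @ x

# Both computation modes produce identical membrane trajectories.
x = np.random.default_rng(0).normal(size=16)
assert np.allclose(serial_no_reset(x, 0.9), parallel_no_reset(x, 0.9))
```

In practice such linear recurrences are computed with associative scans rather than an explicit triangular matrix, but the equivalence above is what lets one train in parallel and still deploy the same neuron as a cheap per-step recurrence at test time.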