Spiking Neural Networks (SNNs), as brain-inspired and energy-efficient networks, currently face the pivotal challenge of finding a suitable and efficient learning framework. The predominant training methodologies, namely Spatial-Temporal Back-propagation (STBP) and ANN-SNN Conversion, suffer from substantial training overhead or pronounced inference latency, respectively, which impedes the advancement of SNNs in scaling to larger networks and tackling intricate application domains. In this work, we propose a novel parallel conversion learning framework, which establishes a mathematical mapping between each time-step of the parallel spiking neurons and the cumulative spike firing rate. We theoretically validate the lossless and sorting properties of the conversion process and identify the optimal shifting distance for each step. Furthermore, by integrating this framework with a distribution-aware error calibration technique, we achieve efficient conversion for more general activation functions and training-free scenarios. Extensive experiments confirm the significant performance advantages of our method across various conversion cases under ultra-low time latency. To the best of our knowledge, this is the first work that jointly utilizes parallel spiking computation and ANN-SNN Conversion, providing a highly promising approach for SNN supervised training.