Spiking neural networks (SNNs) are promising brain-inspired, energy-efficient models. Compared with conventional deep artificial neural networks (ANNs), SNNs offer superior efficiency and a stronger capability to process temporal information. However, training SNNs remains challenging because their spiking mechanism is non-differentiable. The surrogate gradient method is commonly used to train SNNs, but it often comes with an accuracy disadvantage relative to ANN counterparts. Through an analytical and experimental study of the training process of Leaky Integrate-and-Fire (LIF) neuron-based SNNs, we link this degraded accuracy to vanishing gradients along the temporal dimension. Motivated by this, we propose the Complementary Leaky Integrate-and-Fire (CLIF) neuron. CLIF creates extra paths that facilitate backpropagation of the temporal gradient while keeping the output binary. CLIF is hyperparameter-free and broadly applicable. Extensive experiments on a variety of datasets demonstrate CLIF's clear performance advantage over other neuron models. Moreover, CLIF's performance even slightly surpasses that of strong ANNs with identical network structure and training conditions. The code is available at https://github.com/HuuYuLong/Complementary-LIF.
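To make the terms concrete, the following is a minimal sketch of the standard LIF neuron dynamics and the sigmoid-shaped surrogate gradient that the abstract refers to. It is an illustration of the baseline training trick, not the paper's CLIF neuron; all parameter names (`tau`, `v_threshold`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a single Leaky Integrate-and-Fire (LIF) neuron.

    At each time step the membrane potential leakily integrates the
    input, emits a binary spike when it crosses v_threshold, and is
    then hard-reset to v_reset. Parameter values are illustrative.
    """
    v = v_reset
    spikes, potentials = [], []
    for x in inputs:
        # Leaky integration: decay toward reset, add input current.
        v = v + (x - (v - v_reset)) / tau
        s = 1.0 if v >= v_threshold else 0.0
        potentials.append(v)
        if s:
            v = v_reset  # hard reset after a spike
        spikes.append(s)
    return np.array(spikes), np.array(potentials)

def surrogate_grad(v, v_threshold=1.0, alpha=4.0):
    """Sigmoid surrogate for the derivative of the Heaviside spike
    function: the true derivative is zero almost everywhere, so
    backpropagation replaces it with this smooth approximation."""
    sg = 1.0 / (1.0 + np.exp(-alpha * (v - v_threshold)))
    return alpha * sg * (1.0 - sg)
```

With a constant supra-threshold input, this neuron settles into periodic firing; the surrogate gradient peaks when the membrane potential sits at the threshold, which is where the spike decision is most sensitive.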