Brain-inspired spiking neural networks (SNNs) have attracted widespread research interest due to their low power consumption, high biological plausibility, and strong spatiotemporal information processing capability. Although adopting a surrogate gradient (SG) makes the non-differentiable SNN trainable, simultaneously achieving accuracy comparable to artificial neural networks (ANNs) and preserving low-power operation remains difficult. In this paper, we propose an energy-efficient spike-train-level spiking neural network with spatio-temporal conversion (STCSNN), which offers low computational cost and high accuracy. In the STCSNN, spatio-temporal conversion blocks (STCBs) are proposed to preserve the low-power characteristics of SNNs while improving accuracy. However, the STCSNN cannot directly adopt backpropagation algorithms due to the non-differentiable nature of spike trains. We therefore derive the equivalent gradient of the STCB to obtain a suitable learning rule for STCSNNs. We evaluate the proposed STCSNN on static and neuromorphic datasets, including Fashion-MNIST, CIFAR-10, CIFAR-100, TinyImageNet, and DVS-CIFAR10. The experimental results show that the proposed STCSNN achieves state-of-the-art accuracy on nearly all datasets while using fewer time steps and remaining highly energy-efficient.
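To illustrate the surrogate-gradient idea the abstract refers to, the following is a minimal, generic sketch (not the paper's STCB or its equivalent gradient): a leaky integrate-and-fire (LIF) neuron whose spike function is the non-differentiable Heaviside step, paired with a rectangular surrogate that stands in for its derivative during backpropagation. All function names, the membrane time constant `tau`, the threshold `v_th`, and the surrogate `width` are illustrative assumptions.

```python
import numpy as np

def heaviside(x):
    # Spike generation: fires (1.0) when the membrane potential
    # crosses threshold; derivative is zero almost everywhere.
    return float(x >= 0.0)

def surrogate_grad(x, width=0.5):
    # Rectangular surrogate: approximates dH/dx by a constant
    # 1/(2*width) inside a window around the threshold, 0 outside.
    return (np.abs(x) < width).astype(np.float64) / (2.0 * width)

def lif_forward(inputs, tau=2.0, v_th=1.0):
    # Simulate one LIF neuron over T time steps; inputs has shape (T,).
    v = 0.0
    spikes, pre_acts = [], []
    for i_t in inputs:
        v = v / tau + i_t           # leaky integration of input current
        pre_acts.append(v - v_th)   # pre-activation fed to the surrogate
        s = heaviside(v - v_th)
        spikes.append(s)
        v = v * (1.0 - s)           # hard reset after a spike
    return np.array(spikes), np.array(pre_acts)

spikes, pre = lif_forward(np.array([0.6, 0.6, 0.6, 0.6]))
# spikes: the neuron integrates for two steps, then fires at step 3.
grads = surrogate_grad(pre)  # used in backprop in place of dH/dv
```

In practice the surrogate is only substituted on the backward pass (the forward pass still emits binary spikes), which is what makes end-to-end gradient training of SNNs possible.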