Compared to traditional Artificial Neural Networks (ANNs), Spiking Neural Networks (SNNs) have garnered widespread academic interest for their intrinsic ability to transmit information in a more biologically inspired and energy-efficient manner. However, despite previous efforts to optimize the learning gradients and model structures of SNNs through various methods, SNNs still lag behind ANNs in performance to some extent. The recently proposed multi-threshold model provides more possibilities for further enhancing the learning capability of SNNs. In this paper, we rigorously analyze the relationship among the multi-threshold model, the vanilla spiking model, and quantized ANNs from a mathematical perspective, and then propose the novel LM-HT model, an equidistant multi-hierarchical model that can dynamically regulate the global input current and membrane potential leakage along the time dimension. In addition, we note that the direct training algorithm based on the LM-HT model can be seamlessly integrated with the traditional ANN-SNN conversion framework. This novel hybrid learning framework effectively improves the relatively poor performance of converted SNNs under low time latency. Extensive experimental results demonstrate that our LM-HT model significantly outperforms previous state-of-the-art works on various types of datasets, promoting SNNs to a brand-new level of performance comparable to that of quantized ANNs.
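To make the abstract's core idea concrete, the following is a minimal, illustrative sketch of a multi-threshold integrate-and-fire neuron in the spirit of the LM-HT model: at each timestep the neuron can emit a graded output across L equidistant threshold levels, while learnable per-timestep factors scale the global input current and the membrane leak. All names and parameters here (`lam`, `beta`, `L`, the soft-reset rule) are assumptions for illustration, not the paper's actual formulation or API.

```python
import numpy as np

def lm_ht_forward(x, L=4, v_th=1.0, lam=None, beta=None):
    """Sketch of an LM-HT-style multi-threshold neuron over T timesteps.

    x: input current per timestep, shape (T, N).
    L: number of equidistant threshold levels; at each step the neuron can
       emit up to L spikes (i.e., a graded output in {0, 1, ..., L}).
    lam: hypothetical learnable per-timestep input scaling (global current regulation).
    beta: hypothetical learnable per-timestep membrane leak (1.0 = no leak, IF-like).
    """
    T, N = x.shape
    lam = np.ones(T) if lam is None else lam
    beta = np.ones(T) if beta is None else beta
    v = np.zeros(N)           # membrane potential
    out = np.zeros((T, N))    # graded spike counts per step
    for t in range(T):
        v = beta[t] * v + lam[t] * x[t]        # leak, then integrate input
        s = np.clip(np.floor(v / v_th), 0, L)  # fire across L equidistant thresholds
        v = v - s * v_th                       # soft reset by subtraction
        out[t] = s
    return out
```

With no leak and unit input scaling, the total spike count over T steps approximates a clipped, floored sum of the inputs, which is the intuition behind the stated equivalence to quantized ANN activations; a single-level setting (L=1) recovers a vanilla soft-reset IF neuron.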