Spiking Neural Networks (SNNs) have attracted considerable attention due to their biologically inspired, event-driven nature, which makes them highly suitable for neuromorphic hardware. Time-to-First-Spike (TTFS) coding, in which each neuron fires at most once during inference, offers reduced spike counts, enhanced energy efficiency, and faster processing. However, SNNs employing TTFS coding often suffer from degraded classification accuracy. This paper presents an efficient training framework for TTFS that not only improves accuracy but also accelerates training. Unlike most previous approaches, we first identify two key issues limiting the performance of TTFS neurons: information diminishing and an imbalanced membrane potential distribution. To address these challenges, we propose a novel initialization strategy. Additionally, we introduce a temporal weighting decoding method that aggregates temporal outputs through a weighted sum, supporting backpropagation through time (BPTT). Moreover, we re-evaluate the pooling layer for TTFS neurons and find that average pooling is better suited than max pooling to this coding scheme. Our experimental results show that the proposed training framework yields more stable training and significant performance improvements, achieving state-of-the-art (SOTA) results on both the MNIST and Fashion-MNIST datasets.
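To make the temporal weighting decoding concrete, the sketch below aggregates per-timestep network outputs with a weighted sum so that the classification loss remains differentiable through every simulation step, which is what makes the scheme BPTT-compatible. This is a minimal illustration under stated assumptions: the class name `TemporalWeightedDecoder`, the choice of learnable per-step weights, and the tensor layout are hypothetical and not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class TemporalWeightedDecoder(nn.Module):
    """Aggregate per-timestep outputs with a weighted sum over time.

    Hypothetical sketch of the temporal weighting decoding described in
    the abstract: summing (rather than picking a single readout step)
    lets gradients flow back through all timesteps under BPTT.
    """

    def __init__(self, num_steps: int):
        super().__init__()
        # One scalar weight per timestep; assumed learnable here and
        # initialized to a uniform average over the steps.
        self.weights = nn.Parameter(torch.ones(num_steps) / num_steps)

    def forward(self, outputs: torch.Tensor) -> torch.Tensor:
        # outputs: (num_steps, batch, num_classes), e.g. output-layer
        # membrane potentials collected at each simulation step.
        w = self.weights.view(-1, 1, 1)
        # Weighted sum over the time dimension -> (batch, num_classes).
        return (w * outputs).sum(dim=0)


# Usage sketch: stack the per-step outputs, decode, then apply a
# standard cross-entropy loss; backward() unrolls through all steps.
if __name__ == "__main__":
    num_steps, batch, num_classes = 8, 4, 10
    step_outputs = torch.randn(num_steps, batch, num_classes, requires_grad=True)
    decoder = TemporalWeightedDecoder(num_steps)
    logits = decoder(step_outputs)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, num_classes, (batch,)))
    loss.backward()
```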