Spiking Neural Networks (SNNs), owing to their unique spike-driven nature, are more energy-efficient than Artificial Neural Networks (ANNs). The encoding method directly influences overall network performance, and directly trained SNNs currently rely mainly on direct encoding. On static image datasets, however, direct encoding feeds the same feature map at every time step, failing to fully exploit the spatiotemporal properties of SNNs. Temporal encoding, in contrast, converts input data into spike trains with spatiotemporal structure, but conventional SNNs use the same neurons to process inputs across different time steps, limiting their ability to integrate and exploit spatiotemporal information. To address this, this paper adopts temporal encoding and proposes the Adaptive Spiking Neural Network (ASNN), which improves how conventional SNNs exploit temporal encoding. Temporal encoding is also used less frequently because short time steps can cause significant loss of input information, so practical applications often require many time steps; training large SNNs with long time steps, however, is challenging under hardware constraints. To overcome this, this paper introduces a hybrid encoding approach that reduces the number of time steps required for training while further improving overall network performance. Notably, significant improvements in classification performance are observed on both Spikformer and Spiking ResNet architectures. Our code is available at https://github.com/hhx0320/ASNN
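To make the contrast between the two input schemes concrete, the sketch below illustrates direct encoding (the same feature map repeated at every time step) versus one common temporal scheme, latency encoding, where higher intensities fire earlier. This is a minimal illustrative example in NumPy; the function names are our own, and it does not reproduce the paper's ASNN or hybrid encoding.

```python
import numpy as np

def direct_encode(x, T):
    # Direct encoding: the identical analog feature map is fed at every
    # time step, so no information is distributed across time.
    return np.stack([x] * T, axis=0)  # shape: (T, *x.shape)

def latency_encode(x, T):
    # Latency (temporal) encoding: each input value in [0, 1] emits a
    # single spike, with larger intensities firing at earlier time steps.
    t = np.round((1.0 - x) * (T - 1)).astype(int)  # spike time per element
    spikes = np.zeros((T,) + x.shape)
    np.put_along_axis(spikes, t[None, ...], 1.0, axis=0)
    return spikes  # shape: (T, *x.shape), one spike per element
```

With direct encoding the network sees `T` identical frames, whereas latency encoding spreads the input over the time axis as a sparse spike train, which is the spatiotemporal structure the paper argues conventional SNNs underuse.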