Spiking Neural Networks (SNNs) seek to mimic the spiking behavior of biological neurons and are expected to play a key role in the advancement of neural computing and artificial intelligence. Converting Artificial Neural Networks (ANNs) into SNNs is the most widely used training method, as it ensures that the resulting SNNs perform comparably to ANNs on large-scale datasets. The efficiency of these conversion-based SNNs is largely determined by the neural coding scheme. Current schemes typically encode activations with spike counts or spike timing, both of which are linearly related to ANN activations and therefore require many time steps. To address this limitation, we propose a novel Canonic Signed Spike (CSS) coding scheme. This method incorporates non-linearity into the encoding process by weighting the spikes at each step of neural computation, thereby increasing the information carried by each spike. We identify the temporal coupling phenomenon that arises from weighted spikes and mitigate it by introducing negative spikes together with a Ternary Self-Amplifying (TSA) neuron model. A one-step silent period during neural computation further enables high accuracy at low latency. We apply the proposed methods to directly convert full-precision ANNs and evaluate them on the CIFAR-10 and ImageNet datasets. Our experimental results demonstrate that the CSS coding scheme effectively compresses the number of time steps needed for coding and reduces inference latency with minimal conversion loss.
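The core contrast above can be sketched in a few lines: rate coding represents an activation by a spike count, so precision grows only linearly with the number of time steps T, whereas weighting signed spikes by decaying powers of two (in the spirit of canonic-signed-digit representations) makes precision grow exponentially in T. The thresholds and weights below are illustrative assumptions, not the paper's exact encoding rule.

```python
def rate_encode(x, T):
    """Rate coding: activation x in [0, 1] approximated by spike count / T."""
    n = round(x * T)
    return [1] * n + [0] * (T - n)  # decoded value = sum(spikes) / T

def css_encode(x, T):
    """Sketch of weighted ternary spike coding (hypothetical illustration):
    a spike at step t carries weight 2**-(t+1) and is -1, 0, or +1."""
    digits = []
    residual = x
    for t in range(T):
        w = 2.0 ** -(t + 1)
        if residual >= w / 2:        # positive spike reduces the residual
            digits.append(1)
            residual -= w
        elif residual <= -w / 2:     # negative spike corrects an overshoot
            digits.append(-1)
            residual += w
        else:
            digits.append(0)         # silent step: residual already small
    return digits

def css_decode(digits):
    """Recover the encoded value from the weighted signed spikes."""
    return sum(d * 2.0 ** -(t + 1) for t, d in enumerate(digits))
```

With T = 8 steps, this weighted scheme reconstructs the activation to within 2**-8, while rate coding at the same T can only resolve multiples of 1/8; the negative spikes play the same error-correcting role that the abstract attributes to them.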