In this article, we propose a new paradigm for training spiking neural networks (SNNs): spike accumulation forwarding (SAF). SNNs are known to be energy-efficient but difficult to train, and many methods have been proposed to address this difficulty. Among them, online training through time (OTTT) enables inference at each time step while keeping the memory cost low. However, to compute efficiently on GPUs, OTTT must carry both the spike trains and a weighted summation of the spike trains through the forward pass. OTTT has also been shown to be related to Spike Representation, an alternative training method, although its theoretical agreement with Spike Representation has not been proven. Our proposed method addresses both issues: SAF halves the number of operations in the forward pass, and we prove theoretically that SAF is consistent with both Spike Representation and OTTT. Experiments confirm these claims and show that SAF reduces memory usage and training time while maintaining accuracy.
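To make the operational contrast concrete, the following is a minimal numpy sketch of the forwarding-cost difference the abstract describes, not the paper's exact algorithm. The layer sizes, the leak factor `lam`, the input spike statistics, and all variable names are assumptions introduced purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 8, 4, 3                # time steps and layer sizes (assumed)
W = rng.standard_normal((n_out, n_in))  # synaptic weights
spikes = (rng.random((T, n_in)) < 0.3).astype(float)  # binary input spike train
lam = 0.9                               # leaky trace decay (assumed)

# OTTT-style forward: each step handles the spike vector itself AND maintains
# a weighted summation (trace) of past spikes -- two quantities per step.
trace = np.zeros(n_in)
for t in range(T):
    s_t = spikes[t]
    trace = lam * trace + s_t           # weighted summation of the spike train
    drive_ottt = W @ s_t                # operation on the spike train itself

# SAF-style forward: carry only the accumulated spikes, so the per-step
# forward bookkeeping is roughly halved.
acc = np.zeros(n_in)
for t in range(T):
    acc = acc + spikes[t]               # spike accumulation
    drive_saf = W @ acc                 # forward the accumulation only
```

The sketch only illustrates why forwarding a single accumulated quantity instead of two per-step quantities halves the forward bookkeeping; the paper's actual update rules and the consistency proofs with Spike Representation and OTTT are not reproduced here.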