Spiking Neural Networks (SNNs) are biologically inspired models that process information in streams of action potentials. However, simulating and training SNNs is computationally expensive, as it requires solving large systems of coupled differential equations. In this paper, we introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs. Our algorithm reduces the computational cost of both the forward and backward passes from O(N) to O(log(N)) per network spike, enabling numerically exact simulations of large spiking networks and their efficient training using backpropagation through time. By exploiting network sparsity, SparseProp avoids iterating through all neurons at every spike, employing efficient state updates instead. We demonstrate the efficacy of SparseProp on several classical integrate-and-fire neuron models, including a simulation of a sparse SNN with one million leaky integrate-and-fire (LIF) neurons, achieving a speed-up of more than four orders of magnitude over previous event-based implementations. Our work provides an efficient and exact solution for training large-scale spiking neural networks and opens new avenues for building more sophisticated brain-inspired models.
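To illustrate the core idea of finding the next network spike without touching all N neurons, the following minimal sketch is a hypothetical illustration, not the paper's implementation. It simplifies to non-leaky (perfect) integrate-and-fire neurons with constant suprathreshold drive, so each neuron's next threshold crossing has a closed form, and it selects the upcoming spike with a binary heap using lazy deletion via version counters; the names N, K, theta, w, and the helper refresh are illustrative assumptions.

```python
import heapq
import random

# Hypothetical sketch of event-based simulation (not the paper's code):
# perfect integrate-and-fire neurons with constant suprathreshold drive and
# sparse excitatory coupling. Between spikes, each neuron's next threshold
# crossing is analytic, so no time-stepping is needed; a binary heap yields
# the next network spike in O(log N), and each spike touches only the K
# postsynaptic targets instead of all N neurons.

N, K = 1000, 10            # network size, out-degree (sparse: K << N)
theta, I = 1.0, 0.1        # spike threshold, constant input current
w = 0.05                   # excitatory synaptic weight

rng = random.Random(0)
targets = [rng.sample(range(N), K) for _ in range(N)]  # sparse connectivity

V = [rng.uniform(0.0, theta) for _ in range(N)]        # membrane potentials
t_last = [0.0] * N          # time each neuron's state was last updated
version = [0] * N           # invalidates stale heap entries (lazy deletion)

def next_spike_time(i, t_now):
    # dV/dt = I, so the threshold is crossed after (theta - V) / I
    return t_now + (theta - V[i]) / I

heap = [(next_spike_time(i, 0.0), 0, i) for i in range(N)]
heapq.heapify(heap)

def refresh(i, t_now):
    # push an updated prediction; the old entry becomes stale
    version[i] += 1
    heapq.heappush(heap, (next_spike_time(i, t_now), version[i], i))

t, n_spikes = 0.0, 0
while n_spikes < 10_000:
    t_spike, ver, j = heapq.heappop(heap)   # next spike in O(log N)
    if ver != version[j]:
        continue                            # stale entry; skip
    t = t_spike
    V[j], t_last[j] = 0.0, t                # reset the spiking neuron
    refresh(j, t)
    for i in targets[j]:                    # only K postsynaptic updates
        V[i] += I * (t - t_last[i])         # advance state to current time
        t_last[i] = t
        V[i] = min(V[i] + w, theta)         # synaptic kick, clipped at theta
        refresh(i, t)
    n_spikes += 1

print(f"simulated {n_spikes} spikes up to t = {t:.2f}")
```

The O(log(N)) per-spike cost in the abstract corresponds to the heap operations in this sketch: selecting the next spike and refreshing the queue entries of the K affected neurons each cost O(log N), while a full sweep over all N neurons is never performed. Handling leaky dynamics exactly and extending the bookkeeping to the backward pass require additional machinery beyond this simplified example.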