Neuromorphic computing aims to replicate the brain's capabilities for energy-efficient, parallel information processing, promising a solution to the growing demand for faster and more efficient computational systems. Efficient training of neural networks on neuromorphic hardware requires training algorithms that retain the sparsity of spike-based communication during training. Here, we report the first implementation of event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform. We use EventProp, an algorithm for event-based backpropagation in spiking neural networks (SNNs), to compute exact gradients using sparse communication of error signals between neurons. Our implementation simulates multi-layer networks of leaky integrate-and-fire neurons using discretized versions of their differential equations and the corresponding adjoint equations, and uses event packets to transmit spikes and error signals between network layers. We demonstrate a proof of concept of batch-parallelized, on-chip training of SNNs on the Yin-Yang dataset, and provide an off-chip implementation for efficient prototyping, hyper-parameter search, and hybrid training methods.
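To make the forward pass concrete, the following is a minimal sketch, not the authors' SpiNNaker2 implementation, of one leaky integrate-and-fire layer simulated with a simple exponential-Euler discretization of the membrane and synaptic dynamics; all names, parameter values, and the reset-to-zero convention are illustrative assumptions.

```python
import numpy as np

def lif_forward(weights, in_spikes, v_th=1.0, tau_mem=20e-3, tau_syn=5e-3, dt=1e-3):
    """Simulate one LIF layer over T time steps (illustrative sketch).

    weights:   (n_in, n_out) synaptic weight matrix
    in_spikes: (T, n_in) binary input spike raster
    Returns a (T, n_out) binary output spike raster.
    """
    alpha = np.exp(-dt / tau_mem)   # per-step membrane leak factor
    beta = np.exp(-dt / tau_syn)    # per-step synaptic current decay
    T, _ = in_spikes.shape
    n_out = weights.shape[1]
    v = np.zeros(n_out)             # membrane potentials
    i_syn = np.zeros(n_out)         # synaptic currents
    out = np.zeros((T, n_out))
    for t in range(T):
        # Decay the synaptic current and inject weighted input spikes.
        i_syn = beta * i_syn + in_spikes[t] @ weights
        # Leaky integration of the membrane potential.
        v = alpha * v + (1.0 - alpha) * i_syn
        spiked = v >= v_th
        out[t] = spiked
        # Reset membrane potential to zero where a spike occurred.
        v = np.where(spiked, 0.0, v)
    return out
```

In the EventProp setting, gradients are obtained by integrating the corresponding adjoint equations backward in time, with error signals exchanged only at the recorded spike times; the sketch above covers only the forward dynamics.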