The demand for low-power inference and training of deep neural networks (DNNs) on edge devices has intensified the need for algorithms that are both scalable and energy-efficient. While spiking neural networks (SNNs) enable efficient inference by processing complex spatio-temporal dynamics in an event-driven fashion, training them on resource-constrained devices remains challenging due to the high computational and memory demands of conventional error backpropagation (BP)-based approaches. In this work, we draw inspiration from biological mechanisms such as eligibility traces, spike-timing-dependent plasticity, and neural activity synchronization to introduce TESS, a temporally and spatially local learning rule for training SNNs. Our approach addresses both temporal and spatial credit assignment by relying solely on locally available signals within each neuron, allowing computational and memory overheads to scale linearly with the number of neurons, independently of the number of time steps. Despite relying only on local mechanisms, we demonstrate performance comparable to the backpropagation through time (BPTT) algorithm, within $\sim1.4$ accuracy points, on challenging computer vision scenarios relevant at the edge, such as the IBM DVS Gesture dataset, CIFAR10-DVS, and temporal versions of CIFAR10 and CIFAR100. By matching BPTT's performance while keeping time and memory complexity low, TESS enables efficient and scalable on-device learning at the edge.
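The abstract does not spell out TESS's update rule, but the key scaling claim can be illustrated with a generic eligibility-trace-style, three-factor local update for a layer of leaky integrate-and-fire (LIF) neurons. The sketch below is an assumption-laden toy (the surrogate derivative, time constants, and the per-neuron learning signal `L` are illustrative choices, not the paper's method): each synapse keeps one running trace, so memory is proportional to the number of neurons and synapses and does not grow with the number of time steps `T`, unlike BPTT, which must store the full state history.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, T = 8, 4, 20
w = rng.normal(0.0, 0.5, (n_out, n_in))

tau_mem, tau_e, thresh, lr = 0.9, 0.9, 1.0, 0.01

v = np.zeros(n_out)            # membrane potentials (one per neuron)
e = np.zeros((n_out, n_in))    # eligibility traces (one per synapse)

def surrogate(v, thresh=1.0):
    # Triangular surrogate derivative of the non-differentiable spike function.
    return np.maximum(0.0, 1.0 - np.abs(v - thresh))

x_seq = (rng.random((T, n_in)) < 0.3).astype(float)  # random input spike trains
target_rate = 0.2  # illustrative per-neuron target firing rate

for x in x_seq:
    v = tau_mem * v + w @ x                # leaky integration of inputs
    s = (v >= thresh).astype(float)        # spike if threshold crossed
    v -= s * thresh                        # soft reset after a spike
    # Eligibility trace: decays over time and accumulates a local
    # post-synaptic (surrogate) times pre-synaptic (input) product.
    e = tau_e * e + np.outer(surrogate(v), x)
    # A purely local learning signal: nudge each neuron toward target_rate.
    L = target_rate - s
    w += lr * L[:, None] * e               # three-factor weight update

# Only v, e, and w persist across time steps: state is O(neurons + synapses),
# independent of T, in contrast to BPTT's O(T * neurons) activation storage.
```

The loop updates weights online at every step using only quantities held at the synapse or neuron, which is what allows memory to stay constant in `T`.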