Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics, but the properties and advantages of such computations are still not well understood. To provide answers, in this work we demonstrate how spiking neurons can enable temporal feature extraction in feed-forward neural networks without the need for recurrent synapses, and how recurrent SNNs can achieve results comparable to LSTMs with fewer parameters. This shows that their bio-inspired computing principles can be successfully exploited beyond energy-efficiency gains, and it evidences their differences with respect to conventional artificial neural networks. These results are obtained through a new task, DVS-Gesture-Chain (DVS-GC), which enables, for the first time, the evaluation of temporal-dependency perception on a real event-based action-recognition dataset. Our study shows that the widely used DVS Gesture benchmark can be solved by networks without temporal feature extraction when its events are accumulated into frames, whereas the new DVS-GC demands an understanding of the order in which events occur. Furthermore, this setup allowed us to reveal the role of the leakage rate of spiking neurons in temporal processing tasks and to demonstrate the benefits of "hard reset" mechanisms. Additionally, we show how time-dependent weights and normalization can lead to an understanding of order by means of temporal attention.
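The leakage rate and reset mechanism mentioned above can be illustrated with a minimal discrete-time leaky integrate-and-fire (LIF) neuron. This is a generic sketch, not the paper's implementation; the decay factor `beta`, threshold `v_th`, and the `hard_reset` flag are illustrative names chosen here.

```python
import numpy as np

def lif_step(v, input_current, beta=0.9, v_th=1.0, hard_reset=True):
    """One discrete-time update of a leaky integrate-and-fire neuron.

    beta is the leak (decay) factor: the membrane potential shrinks by
    this factor each step, so a smaller beta means faster forgetting of
    past inputs -- this is what shapes the neuron's temporal memory.
    """
    v = beta * v + input_current        # leaky integration of the input
    spike = (v >= v_th).astype(float)   # fire where the threshold is crossed
    if hard_reset:
        v = v * (1.0 - spike)           # hard reset: potential snaps to 0
    else:
        v = v - spike * v_th            # soft reset: subtract the threshold
    return v, spike

# A constant sub-threshold input makes the neuron charge up and fire
# periodically; with a hard reset the cycle restarts from zero each time.
v = np.zeros(1)
spikes = []
for _ in range(10):
    v, s = lif_step(v, np.array([0.4]))
    spikes.append(int(s[0]))
# spikes == [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

With a hard reset, any residual potential above threshold is discarded at spike time, which erases older history more aggressively than the soft (subtractive) reset; the choice of `beta` and reset mechanism jointly determine how long past events influence the neuron.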