Spiking neural networks (SNNs) are biologically inspired, event-driven models that are well suited to processing temporal data and offer energy-efficient computation when implemented on neuromorphic hardware. In SNNs, richer neuronal dynamics allow more complex temporal dependencies to be captured, with delays playing a crucial role by allowing past inputs to directly influence present spiking behavior. We propose a general framework for incorporating delays into SNNs through additional state variables. The proposed mechanism enables each neuron to access a finite temporal history of its inputs. The framework is agnostic to the neuron model and can therefore be seamlessly integrated into standard spiking neuron models such as LIF and adLIF. We analyze how the duration of the delays and the learnable parameters associated with them affect performance, and we investigate the architectural trade-offs arising from the additional state variables introduced by the delay mechanism. Experiments on the Spiking Heidelberg Digits (SHD) dataset show that the proposed mechanism matches the performance of existing delay-based SNNs while remaining computationally efficient. Moreover, the results illustrate that incorporating delays can substantially improve the performance of smaller networks.
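The abstract only sketches the idea of realizing delays through additional state variables. The snippet below is a minimal illustrative sketch of one possible reading, not the paper's actual formulation: a LIF layer that keeps a rolling buffer of recent projected inputs (the extra state variables) and combines the buffered taps with learnable per-delay weights before integrating them into the membrane potential. Names such as `DelayedLIF`, `delay_weights`, and the window length `max_delay` are assumptions introduced only for this example.

```python
# Illustrative sketch (assumed parameterisation, not the paper's exact mechanism):
# a LIF neuron whose input current is a learnable mixture over a finite window of
# past inputs, i.e. delays realised through extra state variables (a rolling buffer).
import torch
import torch.nn as nn


class DelayedLIF(nn.Module):
    def __init__(self, in_features, out_features, max_delay=5, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features, bias=False)
        # One learnable weight per delay tap and output neuron (assumed parameterisation).
        self.delay_weights = nn.Parameter(0.1 * torch.randn(max_delay, out_features))
        self.max_delay = max_delay
        self.beta = beta            # membrane leak factor
        self.threshold = threshold  # spike threshold

    def forward(self, x):
        # x: (time, batch, in_features); forward-only sketch, no surrogate gradient.
        T, B, _ = x.shape
        n_out = self.fc.out_features
        v = torch.zeros(B, n_out)                           # membrane potential
        buf = torch.zeros(self.max_delay, B, n_out)         # finite input history (extra state)
        spikes = []
        for t in range(T):
            # Push the newest projected input into the rolling buffer.
            buf = torch.roll(buf, shifts=1, dims=0)
            buf[0] = self.fc(x[t])
            # Input current = learnable combination of the buffered (delayed) inputs.
            i_t = (self.delay_weights.unsqueeze(1) * buf).sum(dim=0)
            v = self.beta * v + i_t
            s = (v >= self.threshold).float()
            v = v - s * self.threshold                      # soft reset
            spikes.append(s)
        return torch.stack(spikes)                          # (time, batch, out_features)


# Usage: random spike trains, 100 timesteps, batch of 4, 700 input channels (SHD-like).
layer = DelayedLIF(700, 128, max_delay=5)
out = layer(torch.rand(100, 4, 700).bernoulli_(0.05))
print(out.shape)  # torch.Size([100, 4, 128])
```

In this reading, setting `max_delay=1` recovers a standard LIF layer, which is consistent with the claim that the mechanism can be integrated into existing neuron models without changing their underlying dynamics.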