Training transmission delays in spiking neural networks (SNNs) has been shown to substantially improve their performance on complex temporal tasks. In this work, we show that learning either axonal or dendritic delays enables deep feedforward SNNs composed of leaky integrate-and-fire (LIF) neurons to reach accuracy comparable to existing synaptic delay learning approaches, while significantly reducing memory and computational overhead. SNN models with either axonal or dendritic delays achieve up to $95.58\%$ accuracy on the Google Speech Commands (GSC) dataset and $80.97\%$ on the Spiking Speech Commands (SSC) dataset, matching or exceeding prior methods based on synaptic delays or more complex neuron models. By tuning the delay parameters, we also improve the performance of the synaptic delay learning baselines, strengthening the comparison. We find that axonal delays offer the most favorable trade-off, combining lower buffering requirements with slightly higher accuracy than dendritic delays. We further show that the performance of axonal and dendritic delay models is largely preserved under strong delay sparsity, with as few as $20\%$ of delays remaining active, further reducing buffering requirements. Overall, our results indicate that learnable axonal and dendritic delays provide a resource-efficient and effective mechanism for temporal representation in SNNs. Code is available at https://github.com/YounesBouhadjar/AxDenSynDelaySNN
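The buffering advantage of axonal delays can be illustrated with a minimal sketch: an axonal delay assigns one (learnable, here rounded to integer time steps) delay per output neuron, so the required buffer scales with the number of neurons times the maximum delay, rather than with the number of synapses as in per-synapse delay schemes. The function below is an illustrative assumption, not the paper's implementation; the array shapes and the name `apply_axonal_delays` are hypothetical.

```python
import numpy as np

def apply_axonal_delays(spikes: np.ndarray, delays: np.ndarray) -> np.ndarray:
    """Shift each neuron's spike train by its per-neuron axonal delay.

    spikes: (T, N) binary spike raster over T time steps for N neurons.
    delays: (N,) non-negative integer delays, in time steps.

    Buffering cost is O(N * max_delay) because one delay line per neuron
    suffices; per-synapse delays would instead need O(N_pre * N_post * max_delay).
    """
    T, N = spikes.shape
    out = np.zeros_like(spikes)
    for n in range(N):
        d = int(delays[n])
        if d < T:
            # Spikes emitted at step t arrive at step t + d.
            out[d:, n] = spikes[: T - d, n]
    return out
```

A dendritic delay layer would look symmetric, shifting each *input* channel before synaptic integration instead of each output spike train; the per-synapse case replaces the `(N,)` delay vector with an `(N_pre, N_post)` matrix, which is the memory overhead the abstract's comparison targets.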