Spiking Neural Networks (SNNs) promise higher energy efficiency than conventional Quantized Artificial Neural Networks (QNNs) due to their event-driven, spike-based computation. However, prevailing energy evaluations are often oversimplified, focusing on computation while neglecting critical overheads such as data movement and memory access. Such simplifications can lead to misleading conclusions about the true energy benefits of SNNs. This paper presents a rigorous re-evaluation. We establish a fair baseline by mapping rate-encoded SNNs with $T$ timesteps to functionally equivalent QNNs with $\lceil \log_2(T+1) \rceil$ bits. This ensures that both models have comparable representational capacities as well as similar hardware requirements, enabling meaningful energy comparisons. We introduce a detailed analytical energy model encompassing both core computation and data movement. Using this model, we systematically explore a wide parameter space, including intrinsic network characteristics ($T$, spike rate $\SR$, QNN sparsity $\gamma$, model size $N$, and weight bit-width) and hardware characteristics (memory system and network-on-chip). Our analysis identifies the specific operational regimes in which SNNs genuinely offer superior energy efficiency. For example, under typical neuromorphic hardware conditions, SNNs with moderate time windows ($T \in [5,10]$) must keep the average spike rate $\SR$ below 6.4\% to outperform equivalent QNNs. Furthermore, to illustrate the real-world implications of our findings, we analyze the operational lifetime of a typical smartwatch and show that an optimized SNN can nearly double its battery life compared to a QNN. These insights guide the design of truly energy-efficient neural network solutions.
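As a minimal worked instance of this mapping (an illustration under the standard spike-count reading of rate coding, not a result from the paper): over $T$ timesteps a rate-encoded neuron can emit between $0$ and $T$ spikes, i.e. $T+1$ distinct activation levels, so
\[
T = 7 \;\Rightarrow\; \lceil \log_2(7+1) \rceil = 3 \text{ bits}, \qquad T = 10 \;\Rightarrow\; \lceil \log_2(10+1) \rceil = 4 \text{ bits},
\]
placing the moderate time windows $T \in [5,10]$ on par with 3- to 4-bit QNN activations.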