Spiking Neural Networks (SNNs) are promising bio-inspired third-generation neural networks. Recent research has trained deep SNN models with accuracy on par with Artificial Neural Networks (ANNs). Although the event-driven and sparse nature of SNNs shows potential for more energy-efficient computation than ANNs, SNN neurons have internal states that evolve over time. Keeping track of these states can significantly increase data movement and storage requirements, potentially negating SNNs' advantages over ANNs. This paper investigates the energy impact of neuron states and how it is influenced by the mapping chosen for realistic hardware architectures with advanced memory hierarchies. To this end, we develop STEMS, a mapping design space exploration framework for SNNs. STEMS models the stateful behavior of SNNs and explores intra-layer and inter-layer mapping optimizations to minimize data movement, considering both the spatial and temporal dimensions of SNNs. Using STEMS, we show up to 12x reduction in off-chip data movement and 5x reduction in energy (on top of intra-layer optimizations) on two event-based vision SNN benchmarks. Finally, neuron states may not be needed for all SNN layers. By optimizing the neuron states of one of our benchmarks, we achieve a 20x reduction in neuron states and 1.4x better performance without accuracy loss.
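The storage cost of neuron states is easiest to see in a generic discrete-time leaky integrate-and-fire (LIF) model. The sketch below is only an illustration of that general point, not the specific neuron model or mapping used in this paper; the function name `lif_step`, the leak factor `tau`, the threshold `v_th`, and the reset-to-zero behavior are all assumptions for the example. It highlights that the membrane potential `v` must be read and written back at every timestep, which is exactly the state-related data movement that the explored mappings aim to minimize.

```python
import numpy as np

def lif_step(v, x, w, tau=0.9, v_th=1.0):
    """One illustrative discrete-time LIF update (hypothetical parameters).

    v : membrane potentials carried over from the previous timestep
        (the per-neuron state that must be stored and re-fetched each step)
    x : binary input spike vector at this timestep
    w : synaptic weight matrix
    """
    v = tau * v + w @ x                       # leak, then integrate input current
    spikes = (v >= v_th).astype(np.float32)   # fire where the threshold is crossed
    v = v * (1.0 - spikes)                    # reset fired neurons (reset-to-zero variant)
    return v, spikes

# Toy usage: `v` persists across all timesteps, so for large layers the
# states dominate on-chip storage or off-chip traffic unless the mapping
# keeps them local across the temporal dimension.
rng = np.random.default_rng(0)
w = rng.normal(size=(128, 256)).astype(np.float32)
v = np.zeros(128, dtype=np.float32)
for t in range(10):                                       # T timesteps
    x = (rng.random(256) < 0.05).astype(np.float32)       # sparse input spikes
    v, s = lif_step(v, x, w)
```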