Spiking Neural Networks (SNNs) are widely used in brain-inspired computing and neuroscience research. To improve the speed and energy efficiency of SNNs, several many-core accelerators have been developed. However, preserving SNN accuracy typically requires frequent explicit synchronization among all cores, which limits overall efficiency. In this paper, we propose an asynchronous architecture for SNNs that eliminates the need for inter-core synchronization, thereby improving both speed and energy efficiency. The approach exploits the dependencies among neuromorphic cores, which are fixed at compile time. Each core is equipped with a scheduler that monitors the status of its dependencies, allowing it to safely advance to the next timestep without waiting for other cores. This removes the need for global synchronization and minimizes core waiting time despite inherent workload imbalance. Comprehensive evaluations on five different SNN workloads show that our architecture achieves a 1.86x speedup and a 1.55x improvement in energy efficiency compared with state-of-the-art synchronous architectures.
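The scheduling idea described above can be illustrated with a minimal sketch, assuming a simple model in which a downstream core may process timestep t only after every upstream core it depends on has finished timestep t. The names (`Core`, `run_async`) and the event-loop structure are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of dependency-based asynchronous scheduling:
# each core tracks the progress of its compile-time dependencies and
# advances on its own, with no global barrier between timesteps.

class Core:
    def __init__(self, core_id, deps):
        self.id = core_id
        self.deps = deps      # upstream cores, fixed at compile time
        self.done = 0         # highest timestep fully processed

    def ready(self):
        # Scheduler check: every dependency has already produced the
        # spikes this core needs for its next timestep.
        return all(d.done > self.done for d in self.deps)

def run_async(cores, total_steps):
    trace = []                # (core_id, completed_timestep) events
    progressed = True
    while progressed:
        progressed = False
        for c in cores:
            if c.done < total_steps and c.ready():
                c.done += 1   # advance without waiting for unrelated cores
                trace.append((c.id, c.done))
                progressed = True
    return trace

# Three-core feed-forward pipeline: c0 -> c1 -> c2.
c0 = Core(0, [])
c1 = Core(1, [c0])
c2 = Core(2, [c1])
trace = run_async([c0, c1, c2], total_steps=3)
```

In this toy model, every core reaches the final timestep, and the trace never shows a core running ahead of its dependencies; under workload imbalance, a fast core with satisfied dependencies keeps advancing instead of idling at a barrier.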