Neuromorphic computing (NMC) is increasingly viewed as a low-power alternative to conventional von Neumann architectures such as central processing units (CPUs) and graphics processing units (GPUs); however, its computational value proposition has been difficult to define precisely. Here, we propose a computational framework for analyzing NMC algorithms and architectures. Using this framework, we demonstrate that NMC can be analyzed as general-purpose and programmable even though it differs considerably from a conventional stored-program architecture. We show that the time and space scaling of idealized NMC exhibits time and footprint tradeoffs comparable to those of a conventional system with a theoretically infinite number of processors. In contrast, energy scaling for NMC is significantly different from that of conventional systems, as NMC energy costs are event-driven. We show that while energy in conventional systems is largely determined by the operations scheduled according to the structural algorithm graph, the energy of neuromorphic systems scales with the activity of the algorithm, that is, with the activity trace of the algorithm graph. Without making strong assumptions about NMC or conventional costs, we demonstrate which neuromorphic algorithm formulations can exhibit asymptotically improved energy scaling when activity is sparse and decaying over time. We further use these results to identify which broad algorithm families are more or less suitable for NMC approaches.