This paper introduces a novel approach to neuromorphic computing that integrates heterogeneous hardware nodes into a unified, massively parallel architecture. Our system transcends traditional single-node constraints by emulating the neural structure and functionality of the human brain to efficiently process complex tasks. We present an architecture that dynamically virtualizes neuromorphic resources, enabling adaptable allocation and reconfiguration across applications. Our evaluation, using diverse applications and performance metrics, provides significant insights into the system's adaptability and efficiency. We observed scalable throughput gains across configurations of 1, 2, and 4 Virtual Machines (VMs), reaching up to 5.1 gibibits per second (Gibit/s) across a range of data transfer sizes. This scalability demonstrates the system's capacity to handle data-intensive workloads. The energy consumption of our virtualized accelerator environment increased nearly linearly with the addition of NeuroVM accelerators, rising from 25 to 45 millijoules (mJ) as the accelerator count grew from 1 to 20. Further, our investigation of reconfiguration overheads revealed that partial reconfigurations significantly reduce reconfiguration time compared to full reconfigurations, particularly as the number of virtual machines grows, as shown by time measurements on a logarithmic scale.
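The near-linear energy trend reported above can be illustrated with a minimal sketch. Assuming the two reported endpoints (25 mJ at 1 accelerator, 45 mJ at 20 accelerators) lie on a straight line, the implied marginal cost is roughly 1.05 mJ per additional accelerator. The function below is purely illustrative and not part of the paper's methodology:

```python
def estimated_energy_mj(num_accelerators: int) -> float:
    """Linearly interpolate total energy (mJ) between the reported endpoints.

    Hypothetical model: assumes the 25 mJ (1 accelerator) and 45 mJ
    (20 accelerators) measurements lie on a straight line.
    """
    e_min, e_max = 25.0, 45.0            # reported endpoint measurements
    slope = (e_max - e_min) / (20 - 1)   # ~1.05 mJ per extra accelerator
    return e_min + slope * (num_accelerators - 1)

# Interpolated estimate for a mid-range configuration:
print(round(estimated_energy_mj(10), 1))
```

Under this assumption, a 10-accelerator configuration would sit near the midpoint of the reported range; the actual measurements may deviate where the paper notes only "nearly" linear behavior.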