Efficient parallel computing has become pivotal to advancing artificial intelligence. Yet the deployment of Spiking Neural Networks (SNNs) in this domain is hampered by their inherent sequential computational dependency: each time step's processing relies on the preceding step's outcome, which severely limits how well SNN models can exploit massively parallel computing environments. To address this challenge, this paper introduces the Parallel Spiking Unit (PSU) and two derivatives, the Input-aware PSU (IPSU) and the Reset-aware PSU (RPSU). These units decouple the leaky integration and firing mechanisms of spiking neurons while handling the reset process probabilistically. By preserving the essential computational properties of the spiking neuron model, our approach enables all membrane potentials in an SNN to be computed concurrently, allowing parallel spike generation and substantially improving computational efficiency. Comprehensive experiments across datasets spanning static and sequential images, Dynamic Vision Sensor (DVS) data, and speech show that the PSU and its variants not only significantly improve performance and simulation speed but also raise the energy efficiency of SNNs through sparser neural activity. These results underscore the potential of our method for deploying SNNs in high-performance parallel computing applications.
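The core idea above can be illustrated with a minimal sketch: once the reset is removed from the recurrence (the paper handles it probabilistically), leaky integration becomes a linear recurrence whose solution at every time step is a decay-weighted sum of past inputs, so all membrane potentials can be obtained in one matrix product instead of a sequential loop. This sketch is not the paper's PSU implementation; the decay `lam` and threshold `v_th` are illustrative assumptions.

```python
import numpy as np

def sequential_lif(I, lam=0.9, v_th=1.0):
    """Step-by-step leaky integration without reset:
    u[t] = lam * u[t-1] + I[t]; each step needs the previous one."""
    u, potentials = 0.0, []
    for x in I:
        u = lam * u + x
        potentials.append(u)
    U = np.array(potentials)
    return U, (U >= v_th).astype(float)  # spikes from thresholding

def parallel_lif(I, lam=0.9, v_th=1.0):
    """All time steps at once: u[t] = sum_{k<=t} lam^(t-k) * I[k],
    expressed as a lower-triangular decay matrix times the input."""
    T = len(I)
    t = np.arange(T)
    # D[t, k] = lam^(t-k) for k <= t, zero above the diagonal
    D = np.tril(lam ** (t[:, None] - t[None, :]))
    U = D @ I
    return U, (U >= v_th).astype(float)

I = np.array([0.5, 0.6, 0.2, 0.9])
U_seq, s_seq = sequential_lif(I)
U_par, s_par = parallel_lif(I)
assert np.allclose(U_seq, U_par)  # identical potentials, computed in parallel
```

The decoupling matters because the firing decision is applied elementwise after the membrane potentials are known, so the only sequential part of the plain LIF model (the reset feedback) is what the IPSU/RPSU variants replace with a probabilistic treatment.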