The computational complexity of deep learning algorithms places significant speed and memory demands on the hardware that executes them. In energy-constrained portable devices, highly efficient processing platforms are indispensable for reproducing capabilities otherwise reserved for much bulkier hardware. In this work, we present a low-power Leaky Integrate-and-Fire (LIF) neuron design fabricated in TSMC's 28 nm CMOS technology as a proof of concept toward an energy-efficient mixed-signal Neuromorphic System-on-Chip (NeuroSoC). The fabricated neuron consumes 1.61 fJ/spike and occupies an active area of 34 $\mu m^{2}$, achieving a maximum spiking frequency of 300 kHz at a 250 mV power supply. These measured characteristics are used in a software model to emulate the dynamics of a Spiking Neural Network (SNN). Trained with supervised backpropagation and a surrogate-gradient technique, the network reaches an accuracy of 82.5\% on the MNIST dataset with 4-bit post-training quantization. These results underscore the potential of such ASIC implementations of quantized SNNs to deliver high-performance, energy-efficient solutions for a range of embedded machine-learning applications.
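The LIF dynamics and the 4-bit quantization mentioned above can be illustrated with a minimal software sketch. This is not the paper's model: the leak factor, threshold, input current, and the symmetric 4-bit quantizer below are all illustrative assumptions chosen only to show the two mechanisms.

```python
# Minimal discrete-time Leaky Integrate-and-Fire (LIF) neuron sketch.
# All parameters (beta, v_th, i_in) are illustrative, not taken from the paper.

def lif_step(v, i_in, beta=0.9, v_th=1.0):
    """One LIF update: leaky integration, threshold check, hard reset."""
    v = beta * v + i_in            # leaky integration of the input current
    spike = 1 if v >= v_th else 0
    if spike:
        v = 0.0                    # reset the membrane potential after a spike
    return v, spike

def quantize_4bit(w):
    """Illustrative signed 4-bit quantizer: 16 levels in roughly [-8/7, 1]."""
    q = max(-8, min(7, round(w * 7)))  # map to integer levels -8..7
    return q / 7                       # rescale back to weight range

# Drive the neuron with a constant, quantized input and record the spike train.
i_in = quantize_4bit(0.3)
v, spikes = 0.0, []
for _ in range(20):
    v, s = lif_step(v, i_in)
    spikes.append(s)
```

With these toy parameters the membrane potential crosses the threshold at a regular interval, producing a periodic spike train; in the paper this rate is bounded by the measured 300 kHz maximum spiking frequency.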