Deep Neural Networks (DNNs) have been successfully applied across various signal processing fields, yielding significant performance gains. However, DNNs generally require substantial computational resources, incurring significant economic costs and posing challenges for deployment on resource-constrained edge devices. In this study, we leverage spiking neural networks (SNNs) and quantization techniques to develop an energy-efficient and lightweight neuromorphic signal processing system. Our system features two principal innovations: a threshold-adaptive encoding (TAE) method and a quantized ternary SNN (QT-SNN). The TAE method efficiently encodes time-varying analog signals into sparse ternary spike trains, thereby reducing the energy and memory demands of signal processing. The QT-SNN, compatible with the ternary spike trains produced by the TAE method, quantizes both membrane potentials and synaptic weights to reduce memory requirements while maintaining performance. Extensive experiments are conducted on two typical signal-processing tasks: speech recognition and electroencephalogram recognition. The results demonstrate that our neuromorphic signal processing system achieves state-of-the-art (SOTA) performance with a 94% reduction in memory requirements. Furthermore, a theoretical energy consumption analysis shows that our system achieves a 7.5x energy saving compared to other SNN works. The efficiency and efficacy of the proposed system highlight its potential as a promising avenue for energy-efficient signal processing.
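The abstract describes TAE as encoding time-varying analog signals into sparse ternary spike trains via an adaptive threshold. The paper's exact algorithm is not reproduced here; the following is a minimal sketch of one plausible delta-style ternary encoder in which the function name, parameters, and the specific threshold-adaptation rule (grow after a spike, decay during silence) are all illustrative assumptions, not the authors' method.

```python
def tae_encode(signal, theta0=0.1, grow=1.5, decay=0.9, theta_min=0.01):
    """Encode a 1-D analog signal into a ternary spike train {-1, 0, +1}.

    Delta-style encoding: emit +1 (-1) when the signal rises (falls)
    past an adaptive threshold around the last encoded baseline, else 0.
    The threshold grows after each spike and decays toward theta_min
    during silence; this adaptation rule is an illustrative assumption.
    """
    baseline = signal[0]
    theta = theta0
    spikes = []
    for x in signal[1:]:
        delta = x - baseline
        if delta >= theta:
            spikes.append(1)           # upward spike
            baseline = x
            theta *= grow              # raise threshold after firing
        elif delta <= -theta:
            spikes.append(-1)          # downward spike
            baseline = x
            theta *= grow
        else:
            spikes.append(0)           # silence keeps the train sparse
            theta = max(theta_min, theta * decay)  # relax threshold
    return spikes
```

A flat signal produces an all-zero (maximally sparse) train, while rapid excursions fire +1/-1 spikes, which is the sparsity property the abstract attributes to TAE.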