Enhancing the computational efficiency of on-device Deep Neural Networks (DNNs) remains a significant challenge in mobile and edge computing. As increasingly complex tasks must be executed under constrained computational resources, most prior work has focused on compressing neural network structures and parameters or on optimizing the underlying systems; comparatively little attention has been paid to optimizing the fundamental building blocks of neural networks: the neurons. In this study, we address a simple but important research question: can we design artificial neurons that offer greater efficiency than the traditional neuron paradigm? Inspired by the threshold mechanisms and the excitation-inhibition balance observed in biological neurons, we propose a novel artificial neuron model, the Threshold Neuron. With Threshold Neurons, we can construct neural networks similar to those built from traditional artificial neurons while significantly reducing hardware implementation complexity. Extensive experiments validate the effectiveness of neural networks built from Threshold Neurons, achieving substantial power savings of 7.51x to 8.19x and area savings of 3.89x to 4.33x at the kernel level, with minimal loss in precision. Furthermore, FPGA-based implementations of these networks demonstrate 2.52x power savings and 1.75x speedups at the system level. The source code will be made available upon publication.