Enhancing the computational efficiency of on-device Deep Neural Networks (DNNs) remains a significant challenge in mobile and edge computing. As increasingly complex tasks must be executed with constrained computational resources, most research has focused on compressing neural network structures and parameters or on optimizing the underlying systems, while limited attention has been paid to optimizing the fundamental building blocks of neural networks: the neurons. In this study, we pose a simple but important research question: Can we design artificial neurons that offer greater efficiency than the traditional neuron paradigm? Inspired by the threshold mechanisms and the excitation-inhibition balance observed in biological neurons, we propose a novel artificial neuron model, the Threshold Neuron. With Threshold Neurons, we can construct neural networks similar to those built from traditional artificial neurons while significantly reducing hardware implementation complexity. Extensive experiments validate the effectiveness of networks built from Threshold Neurons, achieving power savings of 7.51x to 8.19x and area savings of 3.89x to 4.33x at the kernel level, with minimal loss in precision. Furthermore, FPGA-based implementations of these networks demonstrate 2.52x power savings and a 1.75x speedup at the system level. The source code will be made available upon publication.
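The abstract does not specify the internal operation of a Threshold Neuron, so the following is only a minimal hypothetical sketch of the general idea: a conventional neuron performs a full multiply-accumulate, whereas a threshold-style neuron with sign-only (excitatory/inhibitory) weights and a firing threshold avoids multipliers entirely, which is one plausible route to the kind of kernel-level power and area savings reported. The function names, the sign-weight encoding, and the threshold rule here are all assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def conventional_neuron(x, w, b):
    # Standard artificial neuron: multiply-accumulate over real-valued
    # weights, followed by a ReLU activation. Each input costs one
    # multiplication and one addition in hardware.
    return max(0.0, float(np.dot(w, x) + b))

def threshold_neuron(x, w_sign, theta):
    # Hypothetical threshold neuron (illustrative only): weights are
    # restricted to excitatory (+1) or inhibitory (-1) signs, so the
    # accumulation needs only additions/subtractions, no multipliers.
    # The neuron passes its accumulated drive only when it exceeds the
    # firing threshold theta, mimicking a biological threshold mechanism.
    drive = float(np.sum(np.where(w_sign > 0, x, -x)))
    return drive if drive > theta else 0.0

if __name__ == "__main__":
    x = np.array([1.0, 2.0])
    print(conventional_neuron(x, np.array([0.5, 0.5]), 0.0))  # 1.5
    print(threshold_neuron(x, np.array([1, -1]), 0.5))        # below threshold: 0.0
    print(threshold_neuron(x, np.array([1, 1]), 0.5))         # fires: 3.0
```

The hardware intuition is that replacing each per-input multiplier with an adder/subtractor and a single comparator per neuron shrinks both the datapath area and the switching power, which is consistent in spirit (though not in mechanism) with the savings the abstract reports.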