Stochastic computing (SC) offers significant reductions in hardware complexity for traditional convolutional neural networks (CNNs). Despite these advantages, stochastic computing neural networks (SCNNs) often suffer from high resource consumption in components such as stochastic number generators (SNGs) and accumulative parallel counters (APCs), which limits overall performance. This paper proposes a novel SCNN architecture leveraging reconfigurable field-effect transistors (RFETs). Device-level reconfigurability enables the design of highly efficient and compact SNGs, APCs, and other essential components. Furthermore, a dedicated SCNN accelerator architecture is developed to enable system-level simulation. Based on accessible open-source standard cell libraries, experimental results demonstrate that the proposed RFET-based SCNN accelerator achieves significant reductions in area, latency, and energy consumption compared with its FinFET-based counterpart at the same technology node.