Deep neural networks (DNNs) are artificial intelligence models inspired by the structure and function of the human brain. They are designed to process and learn from large amounts of data, which makes them particularly well suited to tasks such as image and speech recognition. Applications of DNNs are growing rapidly, driven by the deployment of specialized accelerators such as Google's Tensor Processing Units (TPUs). In large-scale deployments, the energy efficiency of such accelerators becomes a critical concern. The voltage overscaling (VOS) technique scales the operating voltage of a system below its nominal level, which improves both the energy efficiency and the lifetime of digital circuits. Because VOS is usually applied without lowering the clock frequency, it introduces timing errors. However, some applications, including multimedia processing and DNNs, are intrinsically resilient to errors and noise. In this paper, we exploit the inherent resilience of DNNs to propose a quality-aware voltage overscaling framework for TPUs, named X-TPU, which offers higher energy efficiency and a longer lifetime than conventional TPUs. The X-TPU framework consists of two main parts: a modified TPU architecture that supports runtime voltage overscaling, and an algorithm based on statistical error modeling that determines the voltage of each neuron such that the output quality remains above a given user-defined quality threshold. We synthesized a single-neuron architecture in a 15-nm FinFET technology at various operating voltage levels and extracted a statistical error model for the neuron at each voltage level. Using these models and the proposed algorithm, we determined the appropriate voltage for each neuron. Results show that running a DNN on X-TPU achieves 32% energy savings with only a 0.6% accuracy loss.
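To make the quality-aware voltage assignment concrete, the following is a minimal sketch of a greedy per-neuron assignment in the spirit of the algorithm the abstract describes. It is an illustrative assumption, not the paper's actual method: the voltage levels, per-level error statistics, relative energies, sensitivity weights, and the additive accuracy-loss budget model are all hypothetical placeholders standing in for the statistical error models extracted from synthesis.

```python
# Hypothetical sketch of quality-aware per-neuron voltage assignment.
# All numbers below are illustrative assumptions, not values from the paper.

# Candidate overscaled voltage levels (volts), nominal level first, with an
# assumed statistical error model (expected accuracy loss contributed by one
# neuron at each level) and relative energy per operation at that level.
VOLTAGE_LEVELS = [0.80, 0.70, 0.60, 0.55]
ACC_LOSS_PER_NEURON = {0.80: 0.0, 0.70: 0.0001, 0.60: 0.0005, 0.55: 0.002}
REL_ENERGY = {0.80: 1.00, 0.70: 0.78, 0.60: 0.60, 0.55: 0.52}

def assign_voltages(sensitivities, quality_budget):
    """Greedily pick the lowest voltage for each neuron, visiting the most
    error-tolerant neurons first, while the summed expected accuracy loss
    (scaled by each neuron's sensitivity) stays within quality_budget."""
    assignment = {}
    spent = 0.0
    # Least-sensitive neurons get first pick of the aggressive levels.
    for nid in sorted(sensitivities, key=sensitivities.get):
        chosen = VOLTAGE_LEVELS[0]              # fall back to nominal voltage
        for v in reversed(VOLTAGE_LEVELS):      # try the lowest voltage first
            cost = sensitivities[nid] * ACC_LOSS_PER_NEURON[v]
            if spent + cost <= quality_budget:
                chosen, spent = v, spent + cost
                break
        assignment[nid] = chosen
    return assignment

# Toy example: three neurons with assumed sensitivity weights and a
# user-defined accuracy-loss budget of 0.3%.
neurons = {"n0": 0.2, "n1": 1.0, "n2": 0.5}
volts = assign_voltages(neurons, quality_budget=0.003)
energy = sum(REL_ENERGY[v] for v in volts.values()) / len(volts)
print(volts, round(energy, 3))
```

In this toy run, the two less sensitive neurons receive the most aggressive level while the most sensitive one is held at a safer voltage, keeping the accumulated expected loss within the budget; average energy drops relative to running every neuron at the nominal level.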