Energy efficiency and memory footprint of a convolutional neural network (CNN) implemented on a CNN inference accelerator depend on many factors, including the weight quantization strategy (i.e., data types and bit-widths) and the mapping (i.e., placement and scheduling of the DNN's elementary operations on the accelerator's hardware units). We show that enabling rich mixed quantization schemes during implementation can open a previously hidden space of mappings that utilize the hardware resources more effectively. CNNs that use quantized weights and activations together with suitable mappings can significantly improve the trade-offs among accuracy, energy consumption, and memory requirements compared to less carefully optimized CNN implementations. To find, analyze, and exploit these mappings, we: (i) extend a general-purpose state-of-the-art mapping tool (Timeloop) to support mixed quantization, a feature it does not currently offer; (ii) propose an efficient multi-objective optimization algorithm that finds the most suitable bit-widths and mapping for each DNN layer executed on the accelerator; and (iii) conduct a detailed experimental evaluation to validate the proposed method. On two CNNs (MobileNetV1 and MobileNetV2) and two accelerators (Eyeriss and Simba), we show that for a given quality metric (such as accuracy on ImageNet), energy savings of up to 37% are achievable without any accuracy drop.
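To illustrate the flavor of step (ii), the following is a minimal sketch of a multi-objective (Pareto) search over per-layer weight bit-widths, written in Python. The proxy_energy and proxy_accuracy_loss functions, the layer count, and the candidate bit-widths are all hypothetical stand-ins introduced for this sketch; in the actual method each candidate would be evaluated jointly with a Timeloop mapping and a measured accuracy, which this sketch abstracts away.

```python
# Minimal sketch: Pareto search over per-layer weight bit-widths.
# All cost models here are hypothetical proxies, not the paper's method.
import random

NUM_LAYERS = 8                 # hypothetical CNN depth
BITWIDTHS = [2, 4, 8]          # hypothetical candidate bit-widths per layer

def proxy_energy(bits):
    # Stand-in cost model: energy grows with the bits moved per layer.
    # The real flow would query a mapping tool (e.g., Timeloop) instead.
    return sum(bits)

def proxy_accuracy_loss(bits):
    # Stand-in quality model: aggressive quantization hurts accuracy more.
    # The real flow would measure accuracy on a validation set instead.
    return sum(1.0 / b for b in bits)

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    # Keep only non-dominated (bit-width vector, objectives) pairs.
    return [(c, obj) for c, obj in candidates
            if not any(dominates(o2, obj) for _, o2 in candidates)]

def search(generations=50, population=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(BITWIDTHS) for _ in range(NUM_LAYERS)]
           for _ in range(population)]
    archive = []
    for _ in range(generations):
        scored = [(c, (proxy_energy(c), proxy_accuracy_loss(c))) for c in pop]
        archive = pareto_front(archive + scored)
        # Mutate Pareto-optimal parents: re-draw one layer's bit-width.
        parents = [c for c, _ in archive]
        pop = []
        while len(pop) < population:
            child = list(rng.choice(parents))
            child[rng.randrange(NUM_LAYERS)] = rng.choice(BITWIDTHS)
            pop.append(child)
    return archive

if __name__ == "__main__":
    for bits, (energy, acc_loss) in sorted(search(), key=lambda x: x[1]):
        print(f"bits={bits} energy={energy} accuracy_loss={acc_loss:.2f}")
```

Each point on the returned front is a per-layer bit-width assignment that is not outperformed in both objectives at once, which is the kind of accuracy-versus-energy trade-off curve the evaluation reports.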