Non-uniform quantization, such as power-of-two (PoT) quantization, matches data distributions better than uniform quantization, reducing the quantization error of Deep Neural Networks (DNNs). PoT quantization also allows bit-shift operations to replace multiplications, yet studies on the efficiency of shift-based accelerators for PoT quantization remain limited. Furthermore, existing pipelines for accelerating PoT-quantized DNNs on edge devices are not open-source. In this paper, we first design shift-based processing elements (shift-PEs) for different PoT quantization methods and evaluate their efficiency using synthetic benchmarks. We then design a shift-based accelerator using our most efficient shift-PE and propose PoTAcc, an open-source pipeline for end-to-end acceleration of PoT-quantized DNNs on resource-constrained edge devices. Using PoTAcc, we evaluate the performance of our shift-based accelerator across three DNNs. On average, it achieves a 1.23x speedup and 1.24x energy reduction compared to a multiplier-based accelerator, and a 2.46x speedup and 1.83x energy reduction compared to CPU-only execution. Our code is available at https://github.com/gicLAB/PoTAcc.
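The core idea behind the shift-PEs above can be illustrated with a minimal sketch (not the paper's implementation): a weight is quantized to a signed power of two, so multiplying an integer activation by it reduces to a bit shift. The function names and bit width here are illustrative assumptions.

```python
# Minimal sketch of power-of-two (PoT) quantization: a weight is rounded
# to the nearest signed power of two, so multiplication by it becomes a
# bit shift. Names and the 4-bit exponent range are illustrative choices.
import math

def pot_quantize(w, num_bits=4):
    """Quantize |w| to the nearest power of two, keeping the sign.

    Returns (sign, exponent) so that w ~= sign * 2**exponent.
    """
    if w == 0:
        return 0, 0
    sign = 1 if w > 0 else -1
    exp = round(math.log2(abs(w)))
    # Clip the exponent to the range representable with num_bits.
    lo, hi = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    exp = max(lo, min(hi, exp))
    return sign, exp

def shift_multiply(x, sign, exp):
    """Multiply integer activation x by sign * 2**exp using shifts only."""
    if exp >= 0:
        return sign * (x << exp)
    return sign * (x >> -exp)

# Example: weight 0.48 is quantized to 2**-1, so x * 0.48 becomes x >> 1.
sign, exp = pot_quantize(0.48)
print(shift_multiply(100, sign, exp))  # 100 >> 1 = 50
```

This is exactly what makes shift-PEs cheaper in hardware: the variable-width multiplier is replaced by a barrel shifter driven by the quantized exponent.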