In deep networks, operations such as ReLU and hardware-driven clamping often cause activations to accumulate near the edges of the distribution, leading to biased clustering and suboptimal quantization in existing nonlinear (NL) quantization methods. This paper introduces Boundary Suppressed K-Means Quantization (BS-KMQ), a novel NL quantization approach designed to reduce the resolution requirements of analog-to-digital converters (ADCs) in in-memory computing (IMC) systems. By suppressing boundary outliers before clustering, BS-KMQ achieves more balanced and informative NL quantization levels. The resulting NL references are implemented using a reconfigurable in-memory NL-ADC, achieving a 7x area improvement over prior NL-ADC designs. When evaluated on ResNet-18, VGG-16, Inception-V3, and DistilBERT, BS-KMQ achieves at least 3x lower quantization error compared to linear, Lloyd-Max, cumulative distribution function (CDF), and K-means methods. It also improves post-training quantization accuracy by up to 66.8%, 25.4%, 66.6%, and 67.7%, respectively, compared to linear quantization. After low-bit fine-tuning, BS-KMQ maintains competitive accuracy with significantly fewer NL-ADC levels (3/3/4/4b). System-level simulations on ResNet-18 (6/2/3b) demonstrate up to a 4x speedup and 24x energy efficiency improvement over existing IMC accelerators.
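The core idea of the abstract — suppress the activation pile-up at the distribution edges before clustering, then use the resulting centroids as nonlinear quantization levels — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function names, the percentile-style boundary band, and the plain 1-D Lloyd iterations are all assumptions made for clarity.

```python
import numpy as np

def bs_kmeans_levels(acts, n_levels, boundary_frac=0.01, iters=50, seed=0):
    """Hypothetical sketch of boundary-suppressed k-means quantization:
    drop samples piled up near the distribution edges (e.g. ReLU zeros,
    clamp saturation) before clustering, so the centroids reflect the
    informative interior of the distribution rather than the boundary mass."""
    acts = np.asarray(acts, dtype=np.float64).ravel()
    lo, hi = acts.min(), acts.max()
    # Boundary suppression (assumed heuristic): exclude samples lying
    # within a small band of either edge of the observed range.
    band = boundary_frac * (hi - lo)
    interior = acts[(acts > lo + band) & (acts < hi - band)]
    if interior.size < n_levels:
        interior = acts  # fall back if suppression removed too much data
    # Plain 1-D k-means (Lloyd iterations) on the interior samples.
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(interior, size=n_levels, replace=False))
    for _ in range(iters):
        assign = np.abs(interior[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(n_levels):
            members = interior[assign == k]
            if members.size:
                centers[k] = members.mean()
        centers = np.sort(centers)
    return centers

def quantize(acts, centers):
    """Map each activation to its nearest nonlinear quantization level,
    as an NL-ADC with those reference levels would."""
    acts = np.asarray(acts, dtype=np.float64)
    idx = np.abs(acts[..., None] - centers).argmin(axis=-1)
    return centers[idx]
```

In an IMC setting the returned `centers` would correspond to the reconfigurable NL-ADC reference levels described in the abstract; here they are simply the cluster centroids of the boundary-suppressed activation sample.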