Despite the success of CNN models on a variety of image classification and segmentation tasks, their extensive computational and storage demands pose considerable challenges for real-world deployment on resource-constrained devices. Quantization is one technique that aims to alleviate these large storage requirements and speed up inference by reducing the precision of model parameters to lower-bit representations. In this paper, we introduce a novel post-training quantization method for model weights. Our method finds optimal clipping thresholds and scaling factors, with mathematical guarantees that it minimizes quantization noise. Empirical results on real-world datasets demonstrate that our quantization scheme significantly reduces model size and computational requirements while preserving model accuracy.
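To make the ingredients concrete, here is a minimal sketch of plain uniform symmetric weight quantization with a clipping threshold and scaling factor. This is illustrative only, not the paper's method: the function names are hypothetical, and the threshold shown is a naive max-abs default rather than the optimal one the paper derives.

```python
import numpy as np

def quantize_weights(w, num_bits=8, clip=None):
    """Uniform symmetric quantization of a weight tensor.

    `clip` is the clipping threshold; the paper finds it optimally,
    but here a naive max-abs value stands in for illustration.
    """
    if clip is None:
        clip = float(np.max(np.abs(w)))      # naive threshold, not optimal
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8-bit signed
    scale = clip / qmax                      # scaling factor
    w_clipped = np.clip(w, -clip, clip)      # apply clipping threshold
    q = np.round(w_clipped / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to floats for inference or error analysis."""
    return q.astype(np.float32) * scale

# Quantization noise is the mean squared reconstruction error.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000).astype(np.float32)
q, s = quantize_weights(w, num_bits=8)
noise = float(np.mean((w - dequantize(q, s)) ** 2))
```

Choosing a tighter `clip` than max-abs trades more clipping error for finer resolution inside the threshold; the optimal balance between the two is exactly what a method like the one described above must solve for.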