Quantization is a crucial technique for deploying deep learning models on resource-constrained devices, such as embedded FPGAs. Prior efforts mostly focus on quantizing matrix multiplications, leaving other layers such as BatchNorm and shortcuts in floating-point form, even though fixed-point arithmetic is more efficient on FPGAs. A common practice is to fine-tune a pre-trained model to fixed-point for FPGA deployment, which potentially degrades accuracy. This work presents QFX, a novel trainable fixed-point quantization approach that automatically learns the binary-point position during model training. Additionally, we introduce a multiplier-free quantization strategy within QFX to minimize DSP usage. QFX is implemented as a PyTorch-based library that efficiently emulates fixed-point arithmetic, as supported by FPGA HLS, in a differentiable manner during backpropagation. With minimal effort, models trained with QFX can readily be deployed through HLS, producing the same numerical results as their software counterparts. Our evaluation shows that, compared to post-training quantization, QFX can quantize element-wise layers to fewer bits while achieving higher accuracy on both CIFAR-10 and ImageNet. We further demonstrate the efficacy of multiplier-free quantization using a state-of-the-art binarized neural network accelerator designed for an embedded FPGA (AMD Xilinx Ultra96 v2). We plan to release QFX as open source.
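To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how trainable fixed-point quantization with a learnable binary-point position can be emulated differentiably in PyTorch. The module name `LearnableFixedPoint`, the continuous parameterization of the fractional bit-width, and the straight-through estimator used for rounding are illustrative assumptions, not QFX's actual API.

```python
# Illustrative sketch only: emulate signed fixed-point quantization with a
# learnable binary point, using a straight-through estimator (STE) so that
# gradients flow during backpropagation.
import torch
import torch.nn as nn


class LearnableFixedPoint(nn.Module):
    """Rounds inputs to a fixed-point grid of step 2^(-frac_bits) and clamps
    them to the signed range representable with `total_bits` bits. The
    fractional bit-width (i.e., the binary-point position) is a trainable
    parameter (an assumption for this sketch)."""

    def __init__(self, total_bits: int = 8, init_frac_bits: float = 4.0):
        super().__init__()
        self.total_bits = total_bits
        # Learn the binary-point position as a continuous parameter.
        self.frac_bits = nn.Parameter(torch.tensor(init_frac_bits))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quantization step implied by the learned binary-point position.
        step = 2.0 ** (-self.frac_bits)
        # Representable signed range for `total_bits` bits at this step size.
        qmax = (2.0 ** (self.total_bits - 1) - 1) * step
        qmin = -(2.0 ** (self.total_bits - 1)) * step
        # Round to the fixed-point grid; the STE passes gradients through round().
        scaled = x / step
        rounded = scaled + (torch.round(scaled) - scaled).detach()
        return torch.clamp(rounded * step, qmin, qmax)


if __name__ == "__main__":
    q = LearnableFixedPoint(total_bits=8, init_frac_bits=4.0)
    x = torch.randn(4, requires_grad=True)
    q(x).sum().backward()
    # Gradients reach both the input and the binary-point parameter.
    print(x.grad, q.frac_bits.grad)
```

In such a scheme, the learned fractional bit-width would be rounded to an integer for deployment so that the HLS fixed-point types produce bit-exact results; the multiplier-free variant described in the abstract would further constrain quantized values so multiplications reduce to shifts and additions.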