YOLO is a deep neural network (DNN) model designed for robust real-time object detection using a one-stage inference approach. It outperforms other real-time object detectors in both speed and accuracy by a wide margin. However, because YOLO is built on a DNN backbone with a large number of parameters, it incurs a heavy memory footprint, making deployment on memory-constrained devices a serious practical challenge. To overcome this limitation, model compression techniques, such as quantizing parameters to lower-precision values, can be adopted. As the most recent version of YOLO, YOLOv7 achieves state-of-the-art speed and accuracy in the range of 5 FPS to 160 FPS, surpassing all earlier versions of YOLO as well as other existing models. To date, the robustness of several quantization schemes has been evaluated only on older versions of YOLO, and these methods may not yield similar results for YOLOv7, which uses a different architecture. In this paper, we conduct an in-depth study of the effectiveness of a variety of quantization schemes on the pre-trained weights of the state-of-the-art YOLOv7 model. Experimental results demonstrate that 4-bit quantization, combined with a mix of quantization granularities, yields ~3.92x and ~3.86x memory savings for uniform and non-uniform quantization, respectively, with only 2.5% and 1% accuracy loss relative to the full-precision baseline model.
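The uniform quantization mentioned above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it shows a generic uniform affine quantizer that maps float32 weights to 4-bit integer codes (stored here in `uint8` for simplicity) and dequantizes them back, with reconstruction error bounded by half a quantization step. All function names are ours, introduced for illustration only.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Uniform affine quantization: map float weights onto integer
    codes in [0, 2**bits - 1] using a per-tensor scale and zero point.
    Assumes w is non-constant (w.max() > w.min())."""
    qmin, qmax = 0, 2**bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = qmin - w.min() / scale
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    # Reconstruct approximate float weights from integer codes.
    return scale * (q.astype(np.float32) - zero_point)

# Demo on synthetic weights: 4-bit codes give at most 16 distinct levels,
# so the reconstruction error is bounded by half a step (scale / 2).
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, s, z = uniform_quantize(w, bits=4)
w_hat = dequantize(q, s, z)
max_err = np.abs(w - w_hat).max()
```

Per-tensor scaling, as above, is the coarsest granularity; finer per-channel or per-group scales (one `scale`/`zero_point` per slice of the weight tensor) typically recover more accuracy at a small storage cost, which is the kind of granularity trade-off the abstract refers to.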