Quantization lowers memory usage, computational requirements, and latency by using fewer bits to represent model weights and activations. In this work, we investigate the generalization properties of quantized neural networks, a characteristic that has received little attention despite its implications for model performance. In particular, we first develop a theoretical model of quantization in neural networks and demonstrate how quantization functions as a form of regularization. Second, motivated by recent work connecting the sharpness of the loss landscape to generalization, we derive an approximate bound on the generalization of quantized models conditioned on the amount of quantization noise. We then validate our hypothesis by experimenting with over 2000 convolutional and transformer-based models trained on the CIFAR-10, CIFAR-100, and ImageNet datasets.
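For concreteness, the quantization noise referred to above can be illustrated with a standard $b$-bit uniform quantizer; this is a common textbook formulation given here as a sketch, not necessarily the exact quantizer analyzed in the paper:

\begin{align}
Q_{\Delta}(w) &= \Delta \cdot \operatorname{round}\!\left(\frac{w}{\Delta}\right),
\qquad
\Delta = \frac{w_{\max} - w_{\min}}{2^{b} - 1}, \\
\varepsilon &= Q_{\Delta}(w) - w,
\qquad
\varepsilon \in \left[-\tfrac{\Delta}{2}, \tfrac{\Delta}{2}\right],
\qquad
\operatorname{Var}(\varepsilon) \approx \frac{\Delta^{2}}{12},
\end{align}

where the last approximation treats $\varepsilon$ as uniformly distributed over one quantization step, the usual additive-noise model under which fewer bits (larger $\Delta$) inject more noise into the weights.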