Neural network quantization is a critical technique for deploying models on resource-limited devices. Despite its widespread use, the impact of quantization on a model's perceptual field, particularly in relation to class activation maps (CAMs), remains underexplored. This study investigates how quantization influences the spatial recognition abilities of vision models by examining the alignment between CAMs and salient object maps across various architectures. Using a dataset of 10,000 ImageNet images, we conduct a comprehensive evaluation of six diverse CNN architectures: VGG16, ResNet50, EfficientNet, MobileNet, SqueezeNet, and DenseNet. Through the systematic application of quantization techniques, we identify subtle changes in CAMs and in their alignment with salient object maps. Our results demonstrate the differing sensitivities of these architectures to quantization and highlight the implications for model performance and interpretability in real-world applications. This work contributes to a deeper understanding of neural network quantization, offering insights essential for deploying efficient and interpretable models in practical settings.
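The abstract does not specify how the CAM-to-salient-object-map alignment is scored. As an illustrative assumption only, one common choice is an intersection-over-union (IoU) between the thresholded CAM and the thresholded saliency map; the function name and threshold values below are hypothetical, not taken from the paper:

```python
import numpy as np

def cam_saliency_iou(cam, saliency, cam_thresh=0.5, sal_thresh=0.5):
    """Alignment between a class activation map and a salient object map.

    Both inputs are 2-D arrays normalized to [0, 1]. The 0.5 thresholds
    are illustrative assumptions; the paper does not fix a metric.
    """
    cam_mask = cam >= cam_thresh          # binarize the CAM
    sal_mask = saliency >= sal_thresh     # binarize the saliency map
    inter = np.logical_and(cam_mask, sal_mask).sum()
    union = np.logical_or(cam_mask, sal_mask).sum()
    return inter / union if union else 0.0

# Toy example: a 4x4 CAM blob partially overlapping a 4x4 salient region.
cam = np.zeros((8, 8)); cam[2:6, 2:6] = 1.0
sal = np.zeros((8, 8)); sal[3:7, 3:7] = 1.0
print(cam_saliency_iou(cam, sal))  # 9 overlapping cells / 23 in union ≈ 0.391
```

Comparing this score before and after quantizing a model would surface the kind of subtle CAM shifts the study reports.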