This work proposes a unified three-stage framework that produces a quantized DNN with balanced fault and attack robustness. The first stage improves attack resilience via fine-tuning that desensitizes feature representations to small input perturbations. The second stage reinforces fault resilience through fault-aware fine-tuning under simulated bit-flip faults. Finally, a lightweight post-training adjustment integrates quantization to improve efficiency and further mitigate fault sensitivity without degrading attack resilience. Experiments on ResNet18, VGG16, EfficientNet, and Swin-Tiny across CIFAR-10, CIFAR-100, and GTSRB show consistent gains of up to 10.35% in attack resilience and 12.47% in fault resilience, while maintaining competitive accuracy in the quantized networks. The results also reveal an asymmetric interaction: improvements in fault resilience generally increase resilience to adversarial attacks, whereas enhanced adversarial resilience does not necessarily lead to higher fault resilience.
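The fault-aware fine-tuning stage described above relies on injecting simulated bit-flip faults into quantized weights. The abstract does not specify the injection mechanism, so the following is only a minimal illustrative sketch, assuming int8 weight quantization and independent per-bit flips with a fixed probability (the function name `inject_bit_flips` and the `flip_prob` parameter are hypothetical, not from the paper):

```python
import numpy as np

def inject_bit_flips(weights_q, flip_prob, rng, n_bits=8):
    """Flip each bit of an int8 quantized weight tensor independently
    with probability `flip_prob`, emulating memory bit-flip faults.
    Illustrative sketch only; the paper's actual fault model may differ."""
    w = weights_q.astype(np.uint8).copy()       # reinterpret int8 as raw bytes
    for b in range(n_bits):
        mask = rng.random(w.shape) < flip_prob  # elements whose bit b flips
        w[mask] ^= np.uint8(1 << b)             # XOR toggles the selected bit
    return w.astype(np.int8)

# Example: corrupt a small quantized weight tensor with a 5% per-bit flip rate.
rng = np.random.default_rng(0)
wq = rng.integers(-128, 128, size=(4, 4), dtype=np.int8)
faulty = inject_bit_flips(wq, flip_prob=0.05, rng=rng)
```

During fault-aware fine-tuning, such a corrupted copy of the weights would be used in the forward pass so the loss reflects the faulty behavior, while the gradient update is applied to the clean weights.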