Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn considerable attention. While AutoAttack achieves a very high attack success rate, most defense approaches focus on network hardening and robustness enhancement, such as adversarial training. With these methods, the currently best-reported defense withstands about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial- and frequency-domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference and reject manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. The first is a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. The second is a white-box detector based on an analysis of CNN feature maps, which likewise achieves detection rates of 100% and 98.7% on the same benchmarks.
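To illustrate the kind of frequency-domain analysis the abstract alludes to, the sketch below shows one minimal black-box-style statistic: the share of spectral energy outside a low-frequency band of the 2D Fourier spectrum. This is a simplified stand-in, not the paper's actual detector; the `cutoff` parameter, the synthetic test image, and the threshold idea are all illustrative assumptions. Small additive perturbations tend to raise this high-frequency energy share, which is the intuition behind spectrum-based detection.

```python
import numpy as np
from numpy.fft import fft2, fftshift

def highfreq_ratio(img, cutoff=8):
    """Fraction of spectral energy outside a low-frequency square of
    half-width `cutoff` around the DC bin (`cutoff` is an illustrative
    choice, not a value from the paper)."""
    spec = np.abs(fftshift(fft2(img))) ** 2   # centered power spectrum
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    low = spec[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return 1.0 - low / spec.sum()

# Synthetic stand-ins: a smooth, low-frequency "clean" image versus the
# same image with epsilon-scale additive noise playing the role of an
# adversarial perturbation (real AutoAttack noise is structured, not i.i.d.).
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 32)
clean = np.outer(np.sin(t), np.cos(t))
perturbed = clean + rng.normal(0.0, 8 / 255, clean.shape)

r_clean = highfreq_ratio(clean)
r_adv = highfreq_ratio(perturbed)
# A simple detector would threshold this statistic: r_adv > r_clean here,
# since broadband noise adds energy to the high-frequency part of the spectrum.
```

In this toy setting a single scalar threshold already separates the two classes; the detectors described in the abstract refine this idea with a more careful spectral analysis (and, for the white-box variant, with CNN feature maps instead of raw pixels).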