The vulnerability of artificial neural networks to adversarial perturbations in the black-box setting is widely studied in the literature. Most attack methods for constructing these perturbations require an impractically large number of queries to find an adversarial example. In this work, we focus on knowledge distillation as an approach to conducting transfer-based black-box adversarial attacks and propose iteratively training the surrogate model on an expanding dataset. To our knowledge, this work is the first to provide provable guarantees on the success of a knowledge distillation-based attack on classification neural networks: we prove that if the student model has sufficient learning capacity, an attack on the teacher model is guaranteed to be found within a finite number of distillation iterations.
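To make the described loop concrete, here is a minimal sketch of one plausible form of the iterative distillation attack: distill the student on the current transfer set, attack the white-box student, test transfer to the black-box teacher, and expand the dataset with failed candidates. All concrete choices here are assumptions, not the paper's implementation: the toy teacher and student architectures, FGSM as the white-box attack, and the soft-label cross-entropy distillation loss are illustrative stand-ins.

```python
# Hypothetical sketch of an iterative distillation-based black-box attack.
# Teacher is treated as a black box: only its output probabilities are queried.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for the black-box teacher and the surrogate (student) model.
teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))

def query_teacher(x):
    """Black-box query: returns only the teacher's soft predictions."""
    with torch.no_grad():
        return F.softmax(teacher(x), dim=1)

def fgsm(model, x, y, eps=0.1):
    """Craft candidate adversarial examples on the white-box student (FGSM,
    used here as an illustrative attack)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Initial transfer set and its teacher labels.
data = torch.randn(256, 20)
labels = query_teacher(data)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for it in range(10):  # distillation iterations
    # 1) Distill: fit the student to the teacher's responses on the current set.
    for _ in range(100):
        opt.zero_grad()
        # Soft-label cross-entropy against the teacher's probabilities.
        loss = -(labels * F.log_softmax(student(data), dim=1)).sum(1).mean()
        loss.backward()
        opt.step()

    # 2) Attack the student, then test transfer to the black-box teacher.
    hard = labels.argmax(1)
    x_adv = fgsm(student, data, hard)
    if (query_teacher(x_adv).argmax(1) != hard).any():
        print(f"iteration {it}: transferable adversarial example found")
        break

    # 3) Expand the dataset with failed candidates and their teacher labels,
    #    so the next student better matches the teacher near its boundary.
    data = torch.cat([data, x_adv])
    labels = torch.cat([labels, query_teacher(x_adv)])
```

The expanding dataset is the key design choice this sketch illustrates: each failed transfer attempt is labeled by one extra teacher query and folded back into the training set, so successive students approximate the teacher increasingly well in exactly the regions where the attack is being mounted.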