By guaranteeing the absence of adversarial examples within an instance's neighbourhood, certification mechanisms play an important role in demonstrating the robustness of neural networks. In this paper, we ask: can these certifications compromise the very models they are meant to protect? Our new \emph{Certification Aware Attack} exploits certifications to produce computationally efficient, norm-minimising adversarial examples $74\%$ more often than comparable attacks, while reducing the median perturbation norm by more than $10\%$. While these attacks can be used to assess the tightness of certification bounds, they also highlight that releasing certifications can paradoxically reduce security.