In recent years, adversarial attacks against deep learning-based object detectors in the physical world have attracted much attention. To defend against these attacks, researchers have proposed various defense methods against adversarial patches, a typical form of physically realizable attack. However, our experiments showed that simply enlarging the patch size can make these defenses fail. Motivated by this, we evaluated various defense methods against adversarial clothes, which cover a large portion of the human body. Adversarial clothes provide a good test case for defenses against patch-based attacks because they are not only large but also look more natural on humans than a large patch. Experiments showed that all the evaluated defense methods performed poorly against adversarial clothes in both the digital and the physical world. In addition, we crafted a single set of clothes that broke multiple defense methods on Faster R-CNN. The set achieved an Attack Success Rate (ASR) of 96.06% against the undefended detector and ASRs of over 64.84% against nine defended models in the physical world, revealing a common vulnerability of existing adversarial defenses to adversarial clothes. Code is available at: https://github.com/weiz0823/adv-clothes-break-multiple-defenses.