This study investigates a counterintuitive phenomenon in adversarial machine learning: noise-based defenses can, in certain scenarios, inadvertently aid evasion attacks. Although randomness is often employed as a defensive strategy against adversarial examples, our results show that it can backfire against adaptive attackers that use reinforcement learning (RL). In particular, for visually noisy classes, noise injected into the classifier's confidence values can be exploited by the RL attacker, significantly increasing evasion success rates: on a subset of classes, the attack achieved evasion rates up to 20\% higher under the noise-based defense than under the other strategies tested. This effect was not consistent across all classifiers, however, underscoring the complexity of the interaction between noise-based defenses and different models. These results suggest that noise-based defenses can inadvertently create an adversarial training loop that benefits the RL attacker. Our study therefore calls for a more nuanced approach to defensive strategies in adversarial machine learning, particularly in safety-critical applications: it challenges the assumption that randomness universally strengthens defenses against evasion attacks and highlights the importance of considering adaptive, RL-based attackers when designing robust defense mechanisms.
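To make the feedback loop concrete, the following is a minimal sketch of the interaction described above, not the study's implementation: a classifier that randomizes its reported confidence as a defense, and an adaptive black-box attacker that treats that noisy confidence as a reward signal. All names (`toy_classifier`, `noisy_confidence`, `hill_climb_attack`), the toy linear model, and the noise scale are hypothetical, chosen only to keep the example self-contained.

```python
# Sketch of a noise-based confidence defense being used as a reward signal
# by an adaptive attacker. Everything here is illustrative, not the paper's
# actual classifier, defense, or RL agent.
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    """Stand-in for a trained model: the 'true' confidence that x
    belongs to the correct class (hypothetical linear score)."""
    w = np.ones_like(x)
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def noisy_confidence(x, sigma=0.1):
    """Noise-based 'defense': report the confidence with additive
    Gaussian noise, clipped to [0, 1]. sigma is an assumed noise scale."""
    return float(np.clip(toy_classifier(x) + rng.normal(0.0, sigma), 0.0, 1.0))

def hill_climb_attack(x0, steps=200, step_size=0.05, queries_per_step=8):
    """Adaptive attacker: propose a small random perturbation and keep it
    if the reported confidence drops. Averaging a few noisy queries per
    candidate recovers a usable reward signal despite the defense's noise,
    so the randomness ends up feeding the attacker's training loop."""
    x = x0.copy()
    best = np.mean([noisy_confidence(x) for _ in range(queries_per_step)])
    for _ in range(steps):
        candidate = x + rng.normal(0.0, step_size, size=x.shape)
        score = np.mean([noisy_confidence(candidate)
                         for _ in range(queries_per_step)])
        if score < best:  # lower confidence in the correct class
            x, best = candidate, score  # = progress toward evasion
    return x, best

x0 = np.full(8, 0.5)  # benign input the attacker starts from
adv, conf = hill_climb_attack(x0)
print(f"confidence before: {toy_classifier(x0):.3f}, after: {conf:.3f}")
```

The sketch uses simple hill climbing rather than a full RL agent, but the structural point is the same: because the attacker optimizes against repeated queries, the defense's randomness does not hide the confidence signal, it merely adds variance that the attacker can average away or, for noisy classes, exploit outright.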