Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to benign inputs. Moreover, adversarial examples exhibit transferability across models, enabling practical black-box attacks. However, existing methods still fall short of the desired transfer attack performance. In this work, focusing on gradient optimization and consistency, we analyze the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these challenges, we introduce Global Momentum Initialization (GI), which provides global momentum knowledge to mitigate gradient elimination. Specifically, we perform gradient pre-convergence before the attack, with a global search during this stage. GI integrates seamlessly with existing transfer methods, improving the success rate of transfer attacks by an average of 6.4% under various advanced defense mechanisms compared to the state-of-the-art method. GI demonstrates strong transferability in both the image and video attack domains; notably, when attacking advanced defense methods in the image domain, it achieves an average attack success rate of 95.4%. The code is available at $\href{https://github.com/Omenzychen/Global-Momentum-Initialization}{https://github.com/Omenzychen/Global-Momentum-Initialization}$.
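The idea of pre-converging momentum before the actual attack can be sketched on top of the standard momentum iterative (MI-FGSM) update. The sketch below is only our reading of the abstract, not the authors' implementation: the toy loss, the random-start "global search", the warm-up step count, and all hyperparameter values are assumptions, and a real attack would use a victim model's loss instead.

```python
import numpy as np

# Toy differentiable "loss" standing in for a victim model's loss surface:
# L(x) = sum(sin(x)), so grad L = cos(x). Purely illustrative.
def loss_grad(x):
    return np.cos(x)

def gi_mifgsm(x0, eps=0.3, steps=10, pre_steps=5, mu=1.0, seed=0):
    """MI-FGSM with a hypothetical Global Momentum Initialization stage.

    Stage 1 (pre-convergence, our assumed form): run warm-up iterations from
    a randomly perturbed start (a crude "global search") solely to accumulate
    the momentum term g; the perturbed input from this stage is discarded.
    Stage 2: restart from the clean input x0, but initialize momentum with
    the pre-converged g instead of zero.
    """
    alpha = eps / steps
    rng = np.random.default_rng(seed)

    # --- Stage 1: gradient pre-convergence with a random (global) start ---
    g = np.zeros_like(x0)
    x = x0 + rng.uniform(-eps, eps, size=x0.shape)
    for _ in range(pre_steps):
        grad = loss_grad(x)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized accumulation
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)

    # --- Stage 2: the actual attack, restarted from x0 with momentum g ---
    x = x0.copy()
    for _ in range(steps):
        grad = loss_grad(x)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    return x

x0 = np.zeros(4)
x_adv = gi_mifgsm(x0)
# Every update is clipped to the eps-ball, so the perturbation stays bounded.
print(np.abs(x_adv - x0).max() <= 0.3 + 1e-9)
```

Because momentum enters the attack already pointing toward a globally informed ascent direction, the first real iterations are less likely to cancel ("eliminate") each other's gradients, which is the failure mode the abstract targets.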