The ability of deep neural networks (DNNs) comes from extracting and interpreting features from the data provided. By exploiting intermediate features in DNNs instead of relying on hard labels, we craft adversarial perturbations that generalize more effectively, boosting black-box transferability. In previous work, these features ubiquitously come from supervised learning. Inspired by the exceptional synergy between self-supervised learning and the Transformer architecture, this paper explores whether exploiting self-supervised Vision Transformer (ViT) representations can improve adversarial transferability. We present dSVA -- a generative dual self-supervised ViT features attack, which exploits both global structural features from contrastive learning (CL) and local textural features from masked image modeling (MIM), the self-supervised learning paradigm duo for ViTs. We design a novel generative training framework that incorporates a generator to create black-box adversarial examples, along with strategies for training the generator by exploiting the joint features and attention mechanism of self-supervised ViTs. Our findings show that CL and MIM enable ViTs to attend to distinct feature tendencies, which, when exploited in tandem, yield strong adversarial generalizability. By disrupting the dual deep features distilled by self-supervised ViTs, we achieve remarkable black-box transferability to models of various architectures, outperforming the state of the art. Code available at https://github.com/spencerwooo/dSVA.
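To make the idea concrete, the sketch below illustrates the general shape of a generative dual-feature attack under stated assumptions: a generator maps a clean image to a bounded adversarial example, and the training objective pushes the adversarial image's features away from the clean image's features in two self-supervised ViT feature spaces (a CL-pretrained encoder such as DINO and an MIM-pretrained encoder such as MAE). The `TinyEncoder` modules here are random-weight placeholders standing in for real pretrained ViTs, and the loss weighting `lam` is a hypothetical hyperparameter; this is not the authors' exact dSVA objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Placeholder feature extractor standing in for a self-supervised
    ViT (e.g., CL-pretrained DINO or MIM-pretrained MAE). Random weights,
    for illustration only."""
    def __init__(self, dim=32):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=8, stride=8)
    def forward(self, x):
        # (B, dim, H/8, W/8) -> (B, num_patches, dim), a ViT-like token grid
        return self.patch_embed(x).flatten(2).transpose(1, 2)

class PerturbationGenerator(nn.Module):
    """Maps a clean image to an L_inf-bounded adversarial example."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )
    def forward(self, x):
        delta = self.eps * torch.tanh(self.net(x))  # |delta| <= eps
        return (x + delta).clamp(0, 1)

def dual_feature_loss(cl_enc, mim_enc, x_clean, x_adv, lam=1.0):
    """Distance between clean and adversarial features in both the CL
    (global/structural) and MIM (local/textural) spaces; the generator
    is trained to maximize this."""
    cl_dist = 1 - F.cosine_similarity(
        cl_enc(x_adv).mean(1), cl_enc(x_clean).mean(1)).mean()
    mim_dist = (mim_enc(x_adv) - mim_enc(x_clean)).pow(2).mean()
    return cl_dist + lam * mim_dist

torch.manual_seed(0)
cl_enc, mim_enc = TinyEncoder(), TinyEncoder()
gen = PerturbationGenerator()
x = torch.rand(2, 3, 32, 32)            # stand-in clean batch
x_adv = gen(x)                          # bounded adversarial batch
loss = dual_feature_loss(cl_enc, mim_enc, x, x_adv)
print(x_adv.shape, loss.item())
```

In a real training loop, the generator's parameters would be updated by gradient ascent on this loss over a surrogate dataset, after which the frozen generator produces transferable adversarial examples against unseen black-box models.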