Deep neural networks (DNNs) are notoriously vulnerable to adversarial attacks that add carefully crafted perturbations to normal examples in order to fool DNNs. To better understand such attacks, a characterization of the features carried by adversarial examples is needed. In this paper, we tackle this challenge by inspecting the subspaces of sample features through spectral analysis. We first show empirically that the features of clean signals and of adversarial perturbations are redundant and span low-dimensional linear subspaces with minimal overlap, and that classical low-dimensional subspace projection can suppress perturbation features lying outside the subspace of clean signals. This makes it possible for DNNs to learn a subspace in which only the features of clean signals are retained while those of perturbations are discarded, which facilitates distinguishing adversarial examples. To suppress the residual perturbations that are inevitable in subspace learning, we propose an independence criterion that disentangles clean signals from perturbations. Experimental results show that the proposed strategy enables the model to inherently suppress adversaries, which not only boosts model robustness but also motivates new directions for effective adversarial defense.
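To make the subspace argument concrete, below is a minimal numerical sketch of the claim that clean signals and adversarial perturbations span low-dimensional subspaces with minimal overlap, so that projecting onto the clean-signal subspace suppresses the perturbation. The dimensions, the synthetic data, and the exactly orthogonal subspaces are illustrative assumptions (the abstract only claims "minimal overlap"), and the SVD-based projection stands in for the "classical low-dimensional subspace projection" the abstract refers to; this is not the paper's experimental setup.

```python
# Sketch: clean signals and perturbations live in distinct low-dimensional
# subspaces; projecting onto the (estimated) clean subspace removes most of
# the perturbation. Dimensions and data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 256, 8, 1000  # ambient dim, subspace dim, number of samples

# Orthonormal bases for two k-dim subspaces, made exactly orthogonal here
# for simplicity: one for clean signals, one for perturbation features.
basis, _ = np.linalg.qr(rng.standard_normal((d, 2 * k)))
U_clean, U_pert = basis[:, :k], basis[:, k:]

clean = rng.standard_normal((n, k)) @ U_clean.T       # clean signals
pert = 0.1 * rng.standard_normal((n, k)) @ U_pert.T   # stand-in perturbations
adv = clean + pert                                    # adversarial examples

# Spectral analysis: estimate the clean-signal subspace from data via SVD
# (top-k right singular vectors), then project adversarial examples onto it.
_, s, Vt = np.linalg.svd(clean, full_matrices=False)
P = Vt[:k].T @ Vt[:k]                                 # rank-k projector
adv_proj = adv @ P

residual = adv_proj - clean
print("perturbation energy before projection:", np.linalg.norm(adv - clean) ** 2 / n)
print("perturbation energy after  projection:", np.linalg.norm(residual) ** 2 / n)
# The post-projection energy is near zero: the perturbation component outside
# the clean subspace is removed, leaving only whatever overlap remains.
```

In this idealized setting the suppression is essentially exact because the two subspaces are orthogonal by construction; with the minimal (but nonzero) overlap observed empirically, a small residual survives the projection, which is the residue the abstract's proposed independence criterion is meant to address.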