Attackers can deliberately perturb a classifier's input with subtle noise, altering its final prediction. Among the proposed countermeasures, adversarial purification employs generative networks to preprocess input images, filtering out adversarial noise. In this study, we propose specific generators, termed Multiple Latent Variable Generative Models (MLVGMs), for adversarial purification. These models possess multiple latent variables that naturally disentangle coarse from fine features. Taking advantage of these properties, we autoencode images to preserve class-relevant information while discarding and re-sampling any fine detail, including adversarial noise. The procedure is completely training-free, exploiting the generalization abilities of pre-trained MLVGMs on the adversarial purification downstream task. We show that, despite lacking the scale of large models trained on billions of samples, smaller MLVGMs are already competitive with traditional methods and can be used as foundation models. Official code is released at https://github.com/SerezD/gen_adversarial.
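The purification step described above can be sketched as follows. This is a minimal, schematic illustration, not the actual models or API from the paper: `encode`, `decode`, and the latent layout are hypothetical stand-ins, assuming only that the generator exposes an ordered list of latent variables, coarse first, fine last. The coarse latents carrying class-relevant information are kept, while the fine latents, which would carry adversarial perturbations, are re-sampled from the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, n_latents=4, dim=8):
    # Hypothetical encoder: splits the image representation into
    # multiple latent variables, ordered coarse -> fine.
    flat = image.reshape(-1)[: n_latents * dim]
    return [flat[i * dim:(i + 1) * dim].copy() for i in range(n_latents)]

def decode(latents):
    # Hypothetical decoder: maps the latents back to an image vector.
    return np.concatenate(latents)

def purify(image, keep=2, n_latents=4, dim=8):
    """Keep the first `keep` (coarse, class-relevant) latents;
    re-sample the remaining fine latents from the prior N(0, I),
    discarding any adversarial noise they carried."""
    latents = encode(image, n_latents, dim)
    for i in range(keep, n_latents):
        latents[i] = rng.standard_normal(dim)
    return decode(latents)

adv_image = rng.standard_normal(32)   # stand-in for an adversarial input
purified = purify(adv_image, keep=2)
```

Because the whole pipeline only runs pre-trained components forward, no training or fine-tuning is involved; the choice of how many latents to keep trades off semantic fidelity against robustness.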