We introduce the concept of deceptive diffusion -- training a generative AI model to produce adversarial images. Whereas a traditional adversarial attack algorithm aims to perturb an existing image to induce a misclassification, the deceptive diffusion model can create an arbitrary number of new, misclassified images that are not directly associated with training or test images. Deceptive diffusion offers the possibility of strengthening defence algorithms by providing adversarial training data at scale, including types of misclassification that are otherwise difficult to find. In our experiments, we also investigate the effect of training on a partially attacked data set. This highlights a new type of vulnerability for generative diffusion models: if an attacker is able to stealthily poison a portion of the training data, then the resulting diffusion model will generate a similar proportion of misleading outputs.
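As a concrete illustration, the sketch below shows one way to realise the pipeline the abstract describes, under assumptions that are ours rather than the paper's: a one-step FGSM perturbation stands in for the adversarial attack, a randomly initialised linear model stands in for a pretrained classifier, and a small UNet2DModel from Hugging Face diffusers is trained with the standard DDPM noise-prediction loss. The `fraction` parameter mirrors the partial-poisoning experiment; all names, shapes, and hyperparameters are illustrative.

```python
# Minimal sketch of deceptive diffusion: attack (a fraction of) a data set,
# then train a diffusion model on the attacked images. Assumptions: 1x32x32
# images in [0, 1] (e.g. padded MNIST), FGSM as the attack, and a stand-in
# classifier; none of these choices are taken from the paper.
import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler

def fgsm_attack(clf, x, y, eps=0.1):
    """One-step FGSM: nudge x in the direction that increases clf's loss on y."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(clf(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def poison(clf, images, labels, fraction=1.0, eps=0.1):
    """Replace a `fraction` of the images with their adversarial versions."""
    n_poison = int(fraction * len(images))
    out = images.clone()
    if n_poison:
        out[:n_poison] = fgsm_attack(clf, images[:n_poison], labels[:n_poison], eps)
    return out

def train_step(unet, scheduler, optimizer, x):
    """Standard DDPM objective: predict the noise added at a random timestep."""
    noise = torch.randn_like(x)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (x.shape[0],),
                      device=x.device)
    noisy = scheduler.add_noise(x, noise, t)
    loss = F.mse_loss(unet(noisy, t).sample, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in classifier; in practice this would be a trained network whose
# misclassifications the attack targets.
clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 10))

unet = UNet2DModel(sample_size=32, in_channels=1, out_channels=1,
                   block_out_channels=(32, 64, 64),
                   down_block_types=("DownBlock2D",) * 3,
                   up_block_types=("UpBlock2D",) * 3)
scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)

# Toy batch; fraction=0.3 corresponds to a partially attacked training set.
images = torch.rand(16, 1, 32, 32)
labels = torch.randint(0, 10, (16,))
loss = train_step(unet, scheduler, optimizer,
                  poison(clf, images, labels, fraction=0.3))
```

After training on such data, sampling from the diffusion model is expected to yield adversarial (misclassified) images at roughly the poisoned proportion, which is the vulnerability the abstract highlights.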