An adversarial example is a modified input image crafted to cause a machine learning (ML) model to make a mistake; the perturbations are often subtle or invisible to human observers and expose weaknesses in a model's ability to generalize from its training data. Many adversarial attacks can create such examples, differing in approach, effectiveness, and the perceptibility of the changes they introduce. Conversely, defending against such attacks improves the robustness of ML models in image processing and other deep learning domains. Most defence mechanisms require some awareness of the attack, changes to the model itself, or access to a comprehensive set of adversarial examples during training, all of which are often impractical. An alternative is to use an auxiliary model as a preprocessing stage, leaving the primary model unchanged. This study presents a practical and effective solution of this kind: using predictive coding networks (PCnets) as an auxiliary step for adversarial defence. By integrating PCnets into feed-forward networks as a preprocessing step, we substantially bolster resilience to adversarial perturbations. Our experiments on MNIST and CIFAR10 demonstrate the effectiveness of PCnets in mitigating adversarial examples, with robustness improvements of about 82% and 65%, respectively. The PCnet, trained on only a small subset of the dataset, leverages its generative nature to counter adversarial perturbations, pulling perturbed images back toward their original forms. This approach holds promise for enhancing the security and reliability of neural network classifiers against the escalating threat of adversarial attacks.
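The preprocessing defence described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's PCnet: it assumes a linear generative model (a dictionary `W` learned on clean data) and runs iterative predictive-coding-style inference, repeatedly updating a latent code to minimize the prediction error before the reconstruction is passed to the classifier. The function names (`pc_reconstruct`, `defended_predict`) are hypothetical.

```python
import numpy as np

def pc_reconstruct(x, W, steps=200, lr=0.1):
    """Reconstruct x through a linear generative model x ~ W @ z.

    The latent code z is inferred by gradient descent on the squared
    prediction error ||x - W z||^2, the core loop of predictive coding.
    Components of x that the generative model cannot explain (e.g. an
    adversarial perturbation off the clean-data manifold) are suppressed.
    """
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        err = x - W @ z        # top-down prediction error
        z += lr * (W.T @ err)  # latent update reducing the error
    return W @ z               # reconstruction fed to the classifier

def defended_predict(classifier, x, W):
    """Classify the PC reconstruction instead of the raw (possibly perturbed) input."""
    return classifier(pc_reconstruct(x, W))
```

With an orthonormal `W`, the inference loop converges to the projection of `x` onto the span of the generative model, so a perturbation orthogonal to that span is removed; the paper's nonlinear PCnet plays the analogous role for natural images.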