Machine learning models are prone to adversarial attacks, in which inputs are manipulated to cause misclassifications. While previous research has focused on techniques such as Generative Adversarial Networks (GANs), there has been limited exploration of using GANs together with the Synthetic Minority Oversampling Technique (SMOTE) to mount adversarial attacks on text and image classification models. Our study addresses this gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models. We further extend the investigation to face recognition: we train a Convolutional Neural Network (CNN) and subject it to Fast Gradient Sign Method (FGSM) perturbations applied to key features identified by Grad-CAM, a technique that highlights the image regions a CNN relies on for classification. Our experiments reveal a significant vulnerability in classification models: we observe a 20% decrease in accuracy for the top-performing text classification models after the attack, along with a 30% decrease in facial recognition accuracy, underscoring how susceptible these models are to manipulated input data. Adversarial attacks compromise not only the security but also the reliability of machine learning systems. By demonstrating the impact of adversarial attacks on both text classification and face recognition models, our study underscores the urgent need to develop robust defenses against such vulnerabilities.
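On the text side, the SMOTE oversampling step can be sketched with imbalanced-learn applied to vectorized text. This is a minimal illustration, not the paper's pipeline; `X_tfidf` and `y` are hypothetical feature and label arrays:

```python
from imblearn.over_sampling import SMOTE

# Oversample the minority class in TF-IDF feature space
# (X_tfidf and y are hypothetical inputs for illustration).
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X_tfidf, y)
```

For the image-side attack, the following is a minimal sketch of FGSM perturbations restricted to Grad-CAM-salient regions, assuming a PyTorch CNN. The names `model` and `target_layer`, the 0.5 saliency threshold, and `eps=0.03` are illustrative assumptions, not the paper's actual settings:

```python
import torch
import torch.nn.functional as F

def gradcam_mask(model, target_layer, x, label, threshold=0.5):
    """Return a binary mask over pixels that Grad-CAM deems important."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, label].backward()
    fwd.remove(); bwd.remove()
    # Grad-CAM: weight each channel by its spatially averaged gradient,
    # sum over channels, and keep only positive contributions.
    weights = gradients[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode='bilinear',
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam >= threshold).float()

def masked_fgsm(model, target_layer, x, label, eps=0.03):
    """FGSM perturbation applied only where the Grad-CAM mask is active."""
    mask = gradcam_mask(model, target_layer, x, label)
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), torch.tensor([label]))
    loss.backward()
    # Standard FGSM step, gated by the saliency mask.
    perturbed = x_adv + eps * mask * x_adv.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

For a ResNet-style backbone one might call, e.g., `x_adv = masked_fgsm(model, model.layer4[-1], x, label)` and compare accuracy before and after the perturbation on a held-out set; restricting the perturbation to Grad-CAM-salient pixels concentrates the attack on the features the CNN actually uses.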