Adversarial attacks on machine learning algorithms have been a key deterrent to the adoption of AI in many real-world use cases. They significantly degrade the performance of high-accuracy neural networks by forcing misclassifications. These attacks introduce minute, structured perturbations into test samples that are generally imperceptible to human annotators, yet trained neural networks and other models are highly sensitive to them. Historically, adversarial attacks were first identified and studied in the domain of image processing. In this paper, we study adversarial examples in natural language processing, specifically in text classification tasks. We investigate the causes of adversarial vulnerability, particularly in relation to the inherent dimensionality of the model. Our key finding is a very strong correlation between the embedding dimensionality of adversarial samples and their effectiveness against models tuned on inputs of the same embedding dimension. We exploit this sensitivity to design an adversarial defense mechanism: an ensemble of models with varying inherent dimensionality that thwarts the attacks. We evaluate the efficacy of this defense on multiple datasets. We also study the problem of measuring adversarial perturbation under different distance metrics. For all of the aforementioned studies, we run tests on multiple models of varying dimensionality and use a word-vector-level adversarial attack to substantiate the findings.
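The ensemble defense described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: it stands in for the member models with nearest-centroid classifiers over random projections of differing dimensionality (the names `ToyModel` and `ensemble_predict`, and all constants, are illustrative assumptions). The intuition is that a perturbation crafted against one embedding dimensionality is less likely to transfer to members operating at other dimensionalities, so a majority vote can absorb it.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyModel:
    """Illustrative stand-in for a member model: nearest-centroid
    classification over a random projection to `d` dimensions."""
    def __init__(self, d, input_dim):
        self.proj = rng.normal(size=(input_dim, d)) / np.sqrt(d)
        self.centroids = None

    def fit(self, X, y):
        Z = X @ self.proj
        self.centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
        return self

    def predict(self, X):
        Z = X @ self.proj
        # Squared distance from each sample to each class centroid.
        dists = ((Z[:, None, :] - self.centroids[None]) ** 2).sum(-1)
        return dists.argmin(axis=1)

def ensemble_predict(models, X):
    """Majority vote across members with differing embedding dims."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy binary-classification data in a 300-dim "word-vector" space.
X = rng.normal(size=(200, 300))
y = (X[:, 0] > 0).astype(int)
X[y == 1] += 0.5  # shift class 1 so the classes are separable

# Members tuned at different inherent dimensionalities.
models = [ToyModel(d, 300).fit(X, y) for d in (50, 100, 200)]
preds = ensemble_predict(models, X)
print((preds == y).mean())  # ensemble accuracy on the training set
```

In this sketch each member sees the data through a different projection, so agreement across members is required for a confident prediction; a real instantiation would replace `ToyModel` with neural text classifiers trained on word embeddings of different dimensions.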