Deep Neural Networks (DNNs) have been successfully applied to real-world tasks in domains such as connected and automated vehicles, disease diagnosis, and job hiring. However, their implications in these critical application areas are far-reaching, and there is growing concern about the potential bias and robustness of DNN models. Transparent and robust models are in constant demand in high-stakes domains where reliability and safety are mandated, such as healthcare and finance. While most studies have focused on adversarial attacks on images, fewer have investigated the robustness of DNN models in natural language processing (NLP), because adversarial text samples are difficult to generate. To address this gap, we propose a word-level attack model against NLP classifiers called "AED," which stands for Attention mechanism enabled post-model Explanation with Density peaks clustering algorithm for synonym search and substitution. AED tests the robustness of NLP DNN models by interpreting their weaknesses and exploring alternative ways to optimize them. By identifying vulnerabilities and providing explanations, AED can help improve the reliability and safety of DNN models in critical application areas such as healthcare and automated transportation. Our experimental results demonstrate that, compared with existing models, AED can effectively generate adversarial examples that fool the victim model while preserving the original meaning of the input.
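The word-level attack pipeline described above can be sketched with a toy example. This is a minimal, self-contained illustration, not the authors' implementation: the keyword classifier, the tiny embedding table, and all function names are hypothetical. Deletion-based word importance stands in for AED's attention-based explanation, and a simple nearest-neighbour search in embedding space stands in for density peaks clustering of synonyms.

```python
# Hedged sketch of a word-level substitution attack in the spirit of AED.
# All names, weights, and embeddings are illustrative assumptions.
import math

# Toy 2-d word embeddings: synonyms sit close together in this space.
EMB = {
    "good":  [0.9, 0.1], "great": [0.88, 0.15], "fine": [0.8, 0.2],
    "bad":   [-0.9, 0.1], "awful": [-0.85, 0.2],
    "movie": [0.0, 1.0], "film":  [0.05, 0.98],
}

# Toy victim model: mean keyword weight; note the learned quirk that
# "fine" carries a slightly negative weight despite being near "good"
# in embedding space -- exactly the kind of weakness an attack exposes.
W = {"good": 1.0, "great": 0.9, "fine": -0.2,
     "bad": -1.0, "awful": -0.9, "movie": 0.0, "film": 0.1}

def toy_classifier(words):
    """Sign of the mean weight is the predicted sentiment."""
    vals = [W[w] for w in words if w in W]
    return sum(vals) / len(vals) if vals else 0.0

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def importance(words):
    """Deletion-based proxy for attention-derived word importance:
    how much the score moves when each word is removed."""
    base = toy_classifier(words)
    return [(abs(base - toy_classifier(words[:i] + words[i + 1:])), i)
            for i in range(len(words))]

def nearest_synonyms(word, k=2):
    """Nearest neighbours by cosine similarity (a stand-in for the
    density-peaks synonym clustering used by AED)."""
    cands = [(cosine(EMB[word], v), w) for w, v in EMB.items() if w != word]
    return [w for _, w in sorted(cands, reverse=True)[:k]]

def attack(words):
    """Greedily substitute the most important words with close synonyms
    until the toy classifier's predicted label flips."""
    orig_sign = toy_classifier(words) >= 0
    for _, i in sorted(importance(words), reverse=True):
        if words[i] not in EMB:
            continue
        for syn in nearest_synonyms(words[i]):
            trial = words[:i] + [syn] + words[i + 1:]
            if (toy_classifier(trial) >= 0) != orig_sign:
                return trial
    return words

adversarial = attack(["good", "movie"])
print(adversarial)  # a meaning-preserving substitution that flips the label
```

In this toy setting, `attack(["good", "movie"])` swaps "good" for its embedding-space neighbour "fine", which preserves the sentence's meaning while flipping the classifier's prediction, mirroring the attack behaviour the abstract describes.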