Large language models (LLMs) have proven to be very powerful on a wide range of NLP tasks. However, there are still many ways to attack such models at very low cost, so defending them has become an important problem. In our work, we treat adversarial attack results as a new (unseen) domain for the model, and we frame the defense problem as improving the model's robustness on this new domain. We focus on the task of conversation entailment, where a multi-turn natural language dialogue serves as the premise and a transformer model is fine-tuned to predict whether a given hypothesis about the dialogue is true or false. The adversary attacks the hypothesis to fool the model into making wrong predictions. We apply synonym swapping as the attack method. To improve the robustness of the model, we implement several fine-tuning strategies and propose an embedding perturbation loss. Finally, we underline the importance of this work by discussing real-world adversarial attacks in NLP.
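As a concrete illustration of the attack setting, the sketch below shows a minimal synonym-swap perturbation of a hypothesis. The `SYNONYMS` table and the `swap_prob` parameter are hypothetical stand-ins introduced for illustration; a real attack would typically draw candidates from WordNet or counter-fitted embeddings and keep only swaps that preserve the hypothesis's meaning while flipping the model's prediction.

```python
import random

# Hypothetical synonym table for illustration only; a real attack would
# source candidates from WordNet or counter-fitted embeddings and verify
# that each swap preserves meaning while changing the model's prediction.
SYNONYMS = {
    "happy": ["glad", "pleased"],
    "buy": ["purchase"],
    "big": ["large", "huge"],
}

def synonym_swap(hypothesis: str, swap_prob: float = 0.3) -> str:
    """Randomly replace words with synonyms to build an adversarial hypothesis."""
    out = []
    for word in hypothesis.split():
        candidates = SYNONYMS.get(word.lower())
        if candidates and random.random() < swap_prob:
            out.append(random.choice(candidates))
        else:
            out.append(word)
    return " ".join(out)

print(synonym_swap("I want to buy a big house"))
# e.g. "I want to purchase a large house"
```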
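The abstract does not spell out the proposed embedding perturbation loss, so the following is only a hedged sketch of one plausible instantiation, not the paper's confirmed formulation: Gaussian noise is added to the input embeddings during fine-tuning, and the model is trained to classify both the clean and the perturbed inputs correctly. The `sigma` parameter and the use of a HuggingFace-style sequence classification model that accepts `inputs_embeds` are assumptions.

```python
import torch
import torch.nn.functional as F

def embedding_perturbation_loss(model, input_embeds, attention_mask, labels,
                                sigma: float = 0.01):
    """Sketch of an embedding perturbation loss (assumed formulation):
    cross-entropy on clean embeddings plus cross-entropy on the same
    embeddings with Gaussian noise added, encouraging predictions that
    are stable under small perturbations in embedding space."""
    # Forward pass on the clean input embeddings.
    clean_logits = model(inputs_embeds=input_embeds,
                         attention_mask=attention_mask).logits
    # Forward pass on Gaussian-perturbed embeddings; sigma is a
    # hypothetical noise scale, not a value from the paper.
    noise = sigma * torch.randn_like(input_embeds)
    noisy_logits = model(inputs_embeds=input_embeds + noise,
                         attention_mask=attention_mask).logits
    return (F.cross_entropy(clean_logits, labels)
            + F.cross_entropy(noisy_logits, labels))
```

Under this reading, the noise term plays the role of a cheap, label-preserving surrogate for synonym-swap attacks: rather than enumerating lexical substitutions, the model is regularized directly against small shifts in embedding space.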