Natural language processing (NLP) has received unprecedented attention. While advances in NLP models have spurred extensive research into their backdoor vulnerabilities, the potential for these advances to introduce new backdoor threats remains unexplored. This paper proposes Imperio, which harnesses the language-understanding capabilities of NLP models to enrich backdoor attacks, offering a new model-control experience. Demonstrated through the control of image classifiers, Imperio empowers the adversary to manipulate the victim model into producing arbitrary outputs via language-guided instructions. It achieves this by using a language model to fuel a conditional trigger generator, with optimizations designed to extend the language model's understanding to the interpretation and execution of backdoor instructions. Our experiments across three datasets, five attacks, and nine defenses confirm Imperio's effectiveness: it produces contextually adaptive triggers from text descriptions and controls the victim model to yield the desired outputs, even in scenarios not encountered during training. The attack achieves a high success rate on complex datasets without compromising clean-input accuracy and exhibits resilience against representative defenses.
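The core mechanism described above, a language model driving a conditional trigger generator, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the hash-based `embed_instruction` stands in for a real language-model encoder, and the frozen random linear map stands in for a trained generator network; neither reflects Imperio's actual architecture or training procedure. The sketch only conveys the data flow: instruction text → embedding → bounded additive trigger stamped onto a clean image.

```python
import zlib

import numpy as np


def embed_instruction(text, dim=16):
    """Toy stand-in for a language-model embedding: hash each token
    into a fixed-size vector (a real attack would use a pretrained LM)."""
    vec = np.zeros(dim)
    tokens = text.lower().split()
    for tok in tokens:
        # crc32 gives a stable per-token seed across runs
        rng = np.random.default_rng(zlib.crc32(tok.encode("utf-8")))
        vec += rng.standard_normal(dim)
    return vec / max(len(tokens), 1)


def make_trigger(instruction, shape=(8, 8, 3), eps=8 / 255, dim=16):
    """Conditional trigger generator sketch: a fixed linear map from the
    instruction embedding to an additive perturbation bounded by eps."""
    rng = np.random.default_rng(0)  # frozen "generator weights" for the sketch
    W = rng.standard_normal((int(np.prod(shape)), dim))
    trigger = eps * np.tanh(W @ embed_instruction(instruction, dim))
    return trigger.reshape(shape)


def poison(image, instruction, eps=8 / 255):
    """Stamp the instruction-conditioned trigger onto a clean image in [0, 1]."""
    return np.clip(image + make_trigger(instruction, image.shape, eps), 0.0, 1.0)
```

Note the key property the sketch preserves: different instructions yield different triggers from the same generator, so a single backdoored model can be steered toward different outputs at inference time.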