Driven by the rapid development of Large Language Models (LLMs), LLM-based agents have been developed to handle various real-world applications, such as finance, healthcare, and shopping. It is crucial to ensure the reliability and security of LLM-based agents in these applications. However, the safety issues of LLM-based agents are currently under-explored. In this work, we take the first step to investigate one of the typical safety threats, backdoor attacks, against LLM-based agents. We first formulate a general framework of agent backdoor attacks, and then present a thorough analysis of their different forms. Specifically, compared with traditional backdoor attacks on LLMs, which can only manipulate the user inputs and model outputs, agent backdoor attacks exhibit more diverse and covert forms: (1) From the perspective of the final attacking outcome, the attacker can either manipulate the final output distribution or introduce malicious behavior only in an intermediate reasoning step, while keeping the final output correct. (2) Furthermore, the former category can be divided into two subcategories based on the trigger location: the backdoor trigger can either be hidden in the user query or appear in an intermediate observation returned by the external environment. We implement the above variations of agent backdoor attacks on two typical agent tasks, web shopping and tool utilization. Extensive experiments show that LLM-based agents suffer severely from backdoor attacks and that this vulnerability cannot be easily mitigated by current textual backdoor defense algorithms. These results indicate an urgent need for further research on targeted defenses against backdoor attacks on LLM-based agents. Warning: This paper may contain biased content.