Large language models (LLMs) have been widely deployed as the backbone with additional tools and text information for real-world applications. However, integrating external information into LLM-integrated applications raises significant security concerns. Among these, prompt injection attacks are particularly threatening, where malicious instructions injected in the external text information can exploit LLMs to generate answers as the attackers desire. While both training-time and test-time defense methods have been developed to mitigate such attacks, the unaffordable training costs associated with training-time methods and the limited effectiveness of existing test-time methods make them impractical. This paper introduces a novel test-time defense strategy, named Formatting AuThentication with Hash-based tags (FATH). Unlike existing approaches that prevent LLMs from answering additional instructions in external text, our method implements an authentication system, requiring LLMs to answer all received instructions with a security policy and selectively filter out responses to user instructions as the final output. To achieve this, we utilize hash-based authentication tags to label each response, facilitating accurate identification of responses according to the user's instructions and improving the robustness against adaptive attacks. Comprehensive experiments demonstrate that our defense method can effectively defend against indirect prompt injection attacks, achieving state-of-the-art performance under Llama3 and GPT3.5 models across various attack methods. Our code is released at: https://github.com/Jayfeather1024/FATH
翻译:大型语言模型(LLM)作为核心组件,结合附加工具与文本信息,已在现实应用中广泛部署。然而,将外部信息整合至LLM集成应用引发了严重的安全隐患。其中,提示注入攻击尤为危险:通过外部文本信息注入的恶意指令可操纵LLM生成符合攻击者预期的回答。尽管已开发出训练时与测试时防御方法以缓解此类攻击,但训练时方法所需的高昂训练成本与现有测试时方法的有限防御效果,使其均难以实际应用。本文提出一种名为“基于哈希标签的格式认证”(FATH)的新型测试时防御策略。与现有方法(旨在阻止LLM响应外部文本中的附加指令)不同,本方法构建了一套认证系统,要求LLM依据安全策略响应所有接收到的指令,并选择性筛选出对用户指令的响应作为最终输出。为实现这一目标,我们采用基于哈希的认证标签对每个响应进行标记,从而精准识别符合用户指令的响应,并提升对自适应攻击的鲁棒性。综合实验表明,我们的防御方法能有效抵御间接提示注入攻击,在Llama3与GPT3.5模型上针对多种攻击方法均实现了最先进的防御性能。代码已发布于:https://github.com/Jayfeather1024/FATH