Jailbreak attacks on Large Language Models (LLMs) craft prompts that exploit the models into generating malicious content. Existing jailbreak attacks can successfully deceive LLMs, but they cannot deceive humans. This paper proposes a new type of jailbreak attack that can deceive both the LLM and a human reviewer (i.e., a security analyst). Our key insight is borrowed from social psychology: humans are easily deceived when a lie is hidden within truth. Based on this insight, we propose logic-chain injection attacks, which inject malicious intent into benign truths. A logic-chain injection attack first disassembles its malicious target into a chain of benign narrations, and then distributes those narrations throughout a related benign article built on undisputed facts. In this way, the newly generated prompt deceives not only the LLM but also the human.
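To make the assembly step concrete, the following is a minimal Python sketch of how a dissembled chain of narrations might be interleaved into a benign carrier article. The function `inject_logic_chain`, the placeholder article, and the chain contents are hypothetical illustrations of the idea, not the paper's actual implementation.

```python
# Toy sketch of the logic-chain injection assembly step described above.
# All names and example strings are hypothetical placeholders.

def inject_logic_chain(article_paragraphs, chain):
    """Distribute a chain of individually benign narrations across
    the paragraphs of a related benign carrier article."""
    prompt_parts = []
    for i, paragraph in enumerate(article_paragraphs):
        prompt_parts.append(paragraph)
        # Interleave one narration after each paragraph so that no
        # single location in the prompt reveals the overall intent.
        if i < len(chain):
            prompt_parts.append(chain[i])
    return "\n\n".join(prompt_parts)

# Hypothetical benign carrier article and dissembled narration chain.
article = [
    "Paragraph 1: factual background on the topic...",
    "Paragraph 2: standard, well-documented practices...",
    "Paragraph 3: a summary of undisputed facts...",
]
chain = [
    "Benign narration 1 (one step of the dissembled target).",
    "Benign narration 2 (another step, innocuous in isolation).",
]

print(inject_logic_chain(article, chain))
```

Because each narration is innocuous in isolation and the carrier text is factually true, neither an LLM safety filter nor a human skimming the prompt sees an obviously malicious request; only the full chain, read in order, reconstructs the target.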