Large language models (LLMs) are susceptible to jailbreak attacks, which mislead them into outputting harmful content. Although diverse jailbreak strategies exist, there is no unified understanding of why some methods succeed while others fail. This paper examines the behavior of harmful and harmless prompts in the LLM's representation space to investigate the intrinsic properties of successful jailbreak attacks. We hypothesize that successful attacks share a similar property: they effectively move the representation of the harmful prompt toward that of harmless prompts. We incorporate hidden representations into the objective of existing jailbreak attacks so that the attack moves the representation along this acceptance direction, and we conduct experiments with the proposed objective to validate the hypothesis. We hope this study provides new insight into how LLMs process harmfulness information.
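To make the representation-space idea concrete, below is a minimal sketch of how such a term could be added to an existing attack objective. It assumes a HuggingFace causal LM; the model name, the probed layer, the mean-pooling choice, and the trade-off weight `lam` are illustrative assumptions, not choices from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; any causal LM that exposes hidden states works the same way.
MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"
LAYER = -1  # which hidden layer to probe (an assumption, not the paper's choice)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def prompt_representation(prompt: str) -> torch.Tensor:
    """Mean-pool the hidden states of a prompt at the chosen layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    hidden = model(**inputs).hidden_states[LAYER]  # (1, seq_len, d_model)
    return hidden.mean(dim=1).squeeze(0)           # (d_model,)

@torch.no_grad()
def acceptance_direction(harmless_prompts, harmful_prompts):
    """Unit vector pointing from the harmful-prompt centroid to the
    harmless-prompt centroid in representation space."""
    mu_harmless = torch.stack(
        [prompt_representation(p) for p in harmless_prompts]).mean(0)
    mu_harmful = torch.stack(
        [prompt_representation(p) for p in harmful_prompts]).mean(0)
    direction = mu_harmless - mu_harmful
    return direction / direction.norm(), mu_harmful

def augmented_loss(attack_loss, attack_repr, mu_harmful, direction, lam=0.1):
    """Original attack objective plus a representation-space term that rewards
    moving the attack prompt's representation along the acceptance direction.
    `lam` is a hypothetical trade-off weight, not a value from the paper."""
    projection = torch.dot(attack_repr - mu_harmful, direction)
    return attack_loss - lam * projection  # minimizing increases the projection
```

In use, `attack_repr` would be recomputed for the current adversarial prompt with gradients enabled at each optimization step (e.g., inside a GCG-style token search), so the projection term can steer the search alongside the attack's original target-likelihood loss.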