Recent advances in generative AI have enabled ubiquitous access to large language models (LLMs). Empowered by their exceptional capabilities to understand and generate human-like text, these models are being increasingly integrated into our society. At the same time, there are concerns about the potential misuse of this powerful technology, prompting defensive measures from service providers. To bypass such protections, jailbreak prompts have recently emerged as one of the most effective mechanisms for circumventing security restrictions and eliciting harmful content that the models were designed to withhold. Due to the rapid evolution of LLMs and their ease of access via natural language, the frontline of jailbreak prompts is largely found in online forums and among hobbyists. To better understand the threat landscape of semantically meaningful jailbreak prompts, we systematized existing prompts and empirically measured their jailbreak effectiveness. Further, we conducted a user study involving 92 participants with diverse backgrounds to unveil the process of manually creating jailbreak prompts. We observed that participants often succeeded in generating jailbreak prompts regardless of their expertise in LLMs. Building on the insights from the user study, we also developed an AI-assisted system that automates the generation of jailbreak prompts.