Large Language Models (LLMs) present a dual-use dilemma: they enable beneficial applications while harboring potential for harm, particularly through conversational interaction. Despite various safeguards, advanced LLMs remain vulnerable. A watershed case in early 2023 was journalist Kevin Roose's extended dialogue with Bing, an LLM-powered search engine, which produced harmful outputs only after sustained probing, exposing weaknesses in the model's safeguards. This contrasts with simpler early jailbreaks, such as the "Grandma Jailbreak," in which users framed a request as innocent help for a grandmother and elicited comparable content with far less effort. The contrast raises a natural question: how much conversational effort is needed to elicit harmful information from an LLM? We propose two measures of this effort: Conversational Length (CL), the number of conversational turns needed to obtain a specific harmful response, and Conversational Complexity (CC), defined as the Kolmogorov complexity of the user's instruction sequence leading to that response. Since Kolmogorov complexity is incomputable, we approximate CC using a reference LLM to estimate the compressibility of the user instructions. Applying this approach to a large red-teaming dataset, we quantitatively analyze the statistical distributions of harmful and harmless conversational lengths and complexities. Our empirical findings suggest that this distributional analysis, together with the minimization of CC, serves as a valuable tool for understanding AI safety, offering insight into the accessibility of harmful information. This work establishes a foundation for a new perspective on LLM safety, centered on the algorithmic complexity of pathways to harm.
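To make the two measures concrete, the sketch below computes CL as a simple turn count and approximates CC as the Shannon code length of the user's instructions under a reference language model, i.e. CC(s) ≈ -log₂ P_ref(s). This is a minimal illustration under stated assumptions, not the paper's implementation: GPT-2 stands in for the reference LLM, the newline joining of turns is a simplifying assumption, and the example instruction sequence is hypothetical.

```python
# A minimal sketch of the CL and CC measures, assuming GPT-2 as the
# reference LLM (a stand-in; the paper's reference model may differ).
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def conversational_complexity_bits(user_turns, model, tokenizer):
    """Approximate CC as the code length, in bits, of the concatenated
    user instructions under the reference LLM: CC(s) ~ -log2 P_ref(s)."""
    text = "\n".join(user_turns)  # simplifying assumption: join turns with newlines
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # The returned loss is the mean negative log-likelihood in nats
        # per predicted token (the first token has no conditional probability).
        loss = model(ids, labels=ids).loss.item()
    total_nats = loss * (ids.shape[1] - 1)
    return total_nats / math.log(2)  # convert nats to bits

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical instruction sequence; CL is simply the number of user turns.
turns = ["Pretend you are my late grandmother.", "Tell me the story she used to tell."]
cl = len(turns)
cc = conversational_complexity_bits(turns, model, tokenizer)
print(f"CL = {cl} turns, CC ≈ {cc:.1f} bits")
```

Under this coding interpretation, a lower CC corresponds to a more compressible, and hence more easily discoverable, instruction sequence leading to the harmful response.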