Large language models (LLMs) rely on safety alignment to avoid responding to malicious user inputs. Unfortunately, jailbreak can circumvent safety guardrails, resulting in LLMs generating harmful content and raising concerns about LLM safety. Due to language models with intensive parameters often regarded as black boxes, the mechanisms of alignment and jailbreak are challenging to elucidate. In this paper, we employ weak classifiers to explain LLM safety through the intermediate hidden states. We first confirm that LLMs learn ethical concepts during pre-training rather than alignment and can identify malicious and normal inputs in the early layers. Alignment actually associates the early concepts with emotion guesses in the middle layers and then refines them to the specific reject tokens for safe generations. Jailbreak disturbs the transformation of early unethical classification into negative emotions. We conduct experiments on models from 7B to 70B across various model families to prove our conclusion. Overall, our paper indicates the intrinsical mechanism of LLM safety and how jailbreaks circumvent safety guardrails, offering a new perspective on LLM safety and reducing concerns.
翻译:大型语言模型(LLM)依赖安全对齐机制来避免对恶意用户输入作出响应。然而,越狱攻击能够绕过安全护栏,导致LLM生成有害内容,引发对LLM安全性的担忧。由于参数量庞大的语言模型常被视为黑箱,对齐与越狱的内在机制难以阐明。本文通过弱分类器对中间隐藏状态的分析来解释LLM的安全性机制。我们首先证实LLM在预训练阶段(而非对齐阶段)已习得伦理概念,并能在早期层识别恶意与正常输入。对齐机制实际上将早期概念与中间层的情感推测相关联,进而将其细化为特定的拒绝标记以实现安全生成。越狱攻击则干扰了早期非伦理分类向负面情感的转化过程。我们在7B至70B参数规模、涵盖多种模型家族的模型上进行了实验以验证结论。总体而言,本文揭示了LLM安全性的内在机制以及越狱攻击如何规避安全护栏,为理解LLM安全性提供了新视角,有助于缓解相关担忧。