Ensuring safety alignment has become a critical requirement for large language models (LLMs), particularly given their widespread deployment in real-world applications. However, LLMs remain susceptible to jailbreak attacks, which exploit system vulnerabilities to bypass safety measures and generate harmful outputs. Although numerous defense mechanisms based on adversarial training have been proposed, a persistent challenge lies in the exacerbation of over-refusal behaviors, which compromise the overall utility of the model. To address these challenges, we propose a Latent-space Adversarial Training with Post-aware Calibration (LATPC) framework. During the adversarial training phase, LATPC compares harmful and harmless instructions in the latent space and extracts safety-critical dimensions to construct refusal feature attacks, precisely simulating unknown jailbreak attack types that require adversarial mitigation. At the inference stage, an embedding-level calibration mechanism is employed to alleviate over-refusal behaviors with minimal computational overhead. Experimental results demonstrate that, compared with various defense methods across five types of jailbreak attacks, the LATPC framework achieves a superior balance between safety and utility. Moreover, our analysis underscores the effectiveness of extracting safety-critical dimensions from the latent space for constructing robust refusal feature attacks.
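To make the mechanism concrete, the following is a minimal sketch of how a refusal feature attack could be constructed from latent representations, assuming a difference-of-means extraction over paired harmful/harmless hidden states; the function names, the `top_k` budget, and the projection-ablation step are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def extract_refusal_direction(harmful_hidden, harmless_hidden, top_k=64):
    """Sketch: derive a refusal feature from latent representations.

    harmful_hidden / harmless_hidden: (N, d) hidden states collected at a
    chosen layer for harmful vs. harmless instructions. top_k is an assumed
    budget of safety-critical dimensions to retain.
    """
    # The difference of mean activations approximates a "refusal" direction.
    diff = harmful_hidden.mean(dim=0) - harmless_hidden.mean(dim=0)

    # Keep only the dimensions with the largest absolute contribution,
    # treating them as the safety-critical subspace.
    mask = torch.zeros_like(diff)
    mask[diff.abs().topk(top_k).indices] = 1.0
    direction = diff * mask
    return direction / direction.norm()  # unit-norm refusal feature


def refusal_feature_attack(hidden, direction):
    """Ablate the refusal feature from hidden states (B, d), mimicking the
    effect of a jailbreak for adversarial training."""
    coeff = hidden @ direction                 # (B,) projection coefficients
    return hidden - coeff.unsqueeze(-1) * direction
```

In this sketch, the perturbed hidden states produced by `refusal_feature_attack` would stand in for unseen jailbreak attacks during adversarial training; the embedding-level calibration applied at inference is a separate step not shown here.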