Safety alignment mechanisms in Large Language Models (LLMs) often operate as latent internal states, obscuring the model's inherent capabilities. Building on this observation, we model the safety mechanism as an unobserved confounder from a causal perspective. We then propose the Causal Front-Door Adjustment Attack (CFA$^2$), a jailbreaking framework that leverages Pearl's front-door criterion to sever the confounding association and enable robust jailbreaks of LLMs. Specifically, we employ Sparse Autoencoders (SAEs) to physically strip defense-related features from the model's internal representations, isolating the core task intent. We further reduce the computationally expensive marginalization of the front-door adjustment to a deterministic intervention with low inference complexity. Experiments demonstrate that CFA$^2$ achieves state-of-the-art attack success rates while offering a mechanistic interpretation of the jailbreaking process.
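For reference, Pearl's front-door adjustment invoked above has the following standard form. The variable names are an illustrative reading of the abstract, not notation taken from the paper: $X$ is the input prompt, $Z$ the SAE-stripped intent representation acting as the mediator, $Y$ the model response, with the latent safety state as the unobserved confounder between $X$ and $Y$.
% Standard front-door adjustment (Pearl); notation is our illustrative reading.
\begin{equation*}
P\bigl(y \mid \mathrm{do}(x)\bigr)
  \;=\; \sum_{z} P(z \mid x) \sum_{x'} P\bigl(y \mid x', z\bigr)\, P(x').
\end{equation*}
If the mediator is a deterministic map $z = f(x)$, the outer sum collapses to the single term $z = f(x)$, which is one plausible way the expensive marginalization can be replaced by a single deterministic intervention $\mathrm{do}(Z = f(x))$, as the abstract describes.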
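To make the "physically strip defense-related features" step concrete, below is a minimal PyTorch sketch of SAE feature ablation. Everything here is an assumption for illustration: the SparseAutoencoder class, the dimensions, and the defense_idx set are hypothetical stand-ins, not the paper's actual architecture or feature-selection procedure.

```python
import torch
import torch.nn as nn

# Minimal sketch of SAE-based feature ablation, assuming a trained sparse
# autoencoder over a transformer's residual-stream activations. The class
# and the set of "defense-related" latent indices are hypothetical.
class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # ReLU enforces the non-negative, sparse latent code typical of SAEs.
        return torch.relu(self.encoder(h))

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)


def ablate_defense_features(
    sae: SparseAutoencoder,
    h: torch.Tensor,            # residual-stream activations, (batch, d_model)
    defense_idx: torch.Tensor,  # indices of defense-related latents (hypothetical)
) -> torch.Tensor:
    """Strip defense-related SAE features and reconstruct the activation."""
    z = sae.encode(h)
    z[:, defense_idx] = 0.0     # zero out the confounder-linked features
    return sae.decode(z)


# Toy usage with random weights; a real run would load a trained SAE and
# hook this function into the model's forward pass at the chosen layer.
sae = SparseAutoencoder(d_model=512, d_latent=4096)
h = torch.randn(2, 512)
h_clean = ablate_defense_features(sae, h, torch.tensor([10, 42, 99]))
print(h_clean.shape)  # torch.Size([2, 512])
```

In this reading, the ablated reconstruction plays the role of the deterministic mediator $z = f(x)$ in the front-door formula above.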