The rapid adoption of Mixture-of-Experts (MoE) architectures marks a major shift in the deployment of Large Language Models (LLMs). MoE LLMs improve scaling efficiency by activating only a small subset of parameters per token, but their routing structure introduces new safety attack surfaces. We find that safety-critical behaviors in MoE LLMs (e.g., refusal) are concentrated in a small set of experts rather than being uniformly distributed. Building on this, we propose Large Language Lobotomy (L$^3$), a training-free, architecture-agnostic attack that compromises safety alignment by exploiting expert routing dynamics. L$^3$ learns routing patterns that correlate with refusal, attributes safety behavior to specific experts, and adaptively silences the most safety-relevant experts until harmful outputs are produced. We evaluate L$^3$ on eight state-of-the-art open-source MoE LLMs and show that our adaptive expert silencing increases the average attack success rate from 7.3% to 70.4%, reaching up to 86.3%, outperforming prior training-free MoE jailbreak methods. Moreover, bypassing guardrails typically requires silencing fewer than 20% of experts per layer while largely preserving general language utility. These results reveal a fundamental tension between efficiency-driven MoE design and robust safety alignment, and motivate architecture- and routing-aware methods that distribute safety mechanisms more robustly in future MoE LLMs.
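The core mechanism the abstract describes, silencing selected experts so the router can never dispatch tokens to them, can be illustrated with a minimal toy sketch. This is not the paper's implementation: the function name, the softmax-over-top-k router, and the expert counts below are all illustrative assumptions.

```python
import numpy as np

def top_k_route(logits, k, silenced=frozenset()):
    """Toy top-k MoE router. 'Silencing' an expert (illustrative stand-in
    for the paper's expert silencing) masks its router logit to -inf, so it
    can never be selected; weight mass re-normalizes over the survivors."""
    logits = logits.copy()
    logits[list(silenced)] = -np.inf           # silenced experts are unreachable
    top = np.argsort(logits)[-k:][::-1]        # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    return top, probs / probs.sum()

# Toy setting: 8 experts per layer, top-2 routing.
rng = np.random.default_rng(0)
logits = rng.normal(size=8)

experts, weights = top_k_route(logits, k=2)
# Silence the highest-weighted expert (in the attack, this would be the
# expert attributed to refusal behavior); routing falls back to others.
experts_sil, weights_sil = top_k_route(logits, k=2, silenced={int(experts[0])})
```

Under this toy view, the attack loop would repeatedly attribute refusal to experts via their routing statistics and grow the `silenced` set until the model's output changes, which matches the abstract's observation that fewer than 20% of experts per layer typically need to be masked.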