Current safeguard mechanisms for large language models (LLMs) are fragile: they remain susceptible to jailbreak attacks, and even fine-tuning on apparently benign data for downstream tasks can compromise safety. One potential remedy is to perform safety fine-tuning after downstream fine-tuning. However, safety fine-tuning risks catastrophic forgetting: the LLM may regain its safety measures but lose the task-specific knowledge acquired during downstream fine-tuning. In this paper, we introduce a safety realignment framework through subspace-oriented model fusion (SOMF), which combines the safeguard capabilities of the initially aligned model and the current fine-tuned model into a realigned model. Our approach first disentangles all task vectors from the weights of each fine-tuned model. We then identify safety-related regions within these vectors via subspace masking techniques. Finally, we fuse the initially aligned LLM with all task vectors based on the identified safety subspace. We validate that our safety realignment framework satisfies the safety requirements of a single fine-tuned model as well as of multiple models during their fusion. Our findings confirm that SOMF preserves safety without notably compromising performance on downstream tasks, including instruction following in Chinese, English, and Hindi, as well as problem-solving capabilities in Code and Math.
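The fusion step can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes flattened 1-D weight vectors, a single shared binary safety mask, and the simplifying reading that fusion keeps the aligned base weights inside the masked safety subspace while adding task-vector deltas outside it; the function names `task_vector` and `somf_fuse` are hypothetical.

```python
import numpy as np

def task_vector(finetuned: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Task vector: the parameter delta a fine-tuned model adds to the
    initially aligned base model (disentangling step)."""
    return finetuned - base

def somf_fuse(base: np.ndarray,
              task_vectors: list[np.ndarray],
              safety_mask: np.ndarray) -> np.ndarray:
    """Sketch of subspace-oriented fusion on flat weight vectors.

    safety_mask is a hypothetical binary array: 1 marks safety-related
    coordinates (where the aligned base weights are preserved), 0 marks
    task-specific coordinates (where task-vector deltas are applied).
    """
    merged = base.copy()
    for tau in task_vectors:
        merged += (1.0 - safety_mask) * tau  # suppress deltas in the safety subspace
    return merged

# Toy usage: the third coordinate is safety-related, so the fine-tuned
# change there is discarded and the aligned base value is kept.
base = np.array([1.0, 2.0, 3.0, 4.0])
finetuned = np.array([1.5, 2.0, 2.0, 5.0])
mask = np.array([0.0, 0.0, 1.0, 0.0])

merged = somf_fuse(base, [task_vector(finetuned, base)], mask)
print(merged)  # coordinates outside the safety subspace follow the fine-tuned model
```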