Fine-tuning large language models (LLMs) on downstream tasks can inadvertently erode their safety alignment, even when the fine-tuning data is benign. We address this challenge with SafeMERGE, a post-fine-tuning framework that preserves safety while maintaining task utility. It does so by selectively merging fine-tuned and safety-aligned model layers, and only those layers that deviate from safe behavior, as measured by a cosine similarity criterion. We evaluate SafeMERGE against other fine-tuning- and post-fine-tuning-stage approaches for Llama-2-7B-Chat and Qwen-2-7B-Instruct on the GSM8K and PubMedQA tasks, exploring different merging strategies. We find that SafeMERGE consistently reduces harmful outputs compared to the baselines without significantly sacrificing task performance, and sometimes even improves it. The results suggest that our selective, subspace-guided, per-layer merging method provides an effective safeguard against the inadvertent loss of safety in fine-tuned LLMs, outperforming simpler post-fine-tuning-stage defenses.
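For intuition, the sketch below illustrates what a similarity-gated, per-layer merge of this kind could look like in practice. It is a minimal, assumption-laden example: the function names, the threshold `tau`, the interpolation weight `alpha`, and the choice of comparing weight deltas relative to a common base model are all illustrative placeholders, not the exact SafeMERGE procedure or its safety-subspace construction.

```python
# Illustrative sketch of similarity-gated, per-layer model merging.
# All names, thresholds, and the delta-vs-base comparison are hypothetical
# and only convey the general idea of merging layers that drift from a
# safety-aligned direction.
import torch


def cosine(u: torch.Tensor, v: torch.Tensor) -> float:
    """Cosine similarity between two flattened weight tensors."""
    u, v = u.flatten().float(), v.flatten().float()
    return torch.nn.functional.cosine_similarity(u, v, dim=0).item()


def selective_merge(finetuned: dict, aligned: dict, base: dict,
                    tau: float = 0.3, alpha: float = 0.5) -> dict:
    """Merge fine-tuned and safety-aligned weights layer by layer.

    A layer is merged only if its fine-tuning update has drifted away from
    the safety-aligned update direction (cosine similarity below tau);
    otherwise the fine-tuned weights are kept unchanged.
    """
    merged = {}
    for name, w_ft in finetuned.items():
        w_al, w_b = aligned[name], base[name]
        drift = cosine(w_ft - w_b, w_al - w_b)  # compare update directions
        if drift < tau:
            # Layer deviates from safe behavior: interpolate toward the
            # safety-aligned weights.
            merged[name] = alpha * w_ft + (1.0 - alpha) * w_al
        else:
            # Layer remains close to the safe direction: keep the
            # task-specific fine-tuned weights.
            merged[name] = w_ft
    return merged
```

Gating the merge per layer, rather than interpolating the whole model, is what lets such a scheme restore safety only where fine-tuning has drifted while leaving task-relevant layers untouched.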