Harmful fine-tuning attacks introduce significant security risks to fine-tuning services. Mainstream defenses aim to vaccinate the model so that a later harmful fine-tuning attack is less effective. However, our evaluation results show that such defenses are fragile--with a few fine-tuning steps, the model can still learn the harmful knowledge. Motivated by this, we experiment further and find that an embarrassingly simple solution--adding purely random perturbations to the fine-tuned model--can recover the model from harmful behaviors, though it degrades the model's fine-tuning performance. To address this degradation, we further propose Panacea, which optimizes an adaptive perturbation that is applied to the model after fine-tuning. Panacea maintains the model's safety alignment performance without compromising downstream fine-tuning performance. Comprehensive experiments across different harmful ratios, fine-tuning tasks, and mainstream LLMs show that average harmful scores are reduced by up to 21.2% while fine-tuning performance is maintained. As a by-product, we analyze the adaptive perturbation and show that different layers in various LLMs have distinct safety affinity, which coincides with findings from several previous studies. Source code is available at https://github.com/w-yibo/Panacea.
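The random-perturbation baseline above can be sketched as follows. This is a minimal illustration, not the paper's exact recipe: the noise scale `sigma`, the Gaussian distribution, and the per-tensor application are all illustrative assumptions, and NumPy arrays stand in for real model weight tensors.

```python
import numpy as np

def add_random_perturbation(state_dict, sigma=0.01, seed=0):
    """Return a copy of the model weights with i.i.d. Gaussian noise added.

    Sketches the "purely random perturbation" applied to the fine-tuned
    model; sigma and the noise distribution are assumptions for
    illustration only.
    """
    rng = np.random.default_rng(seed)
    return {
        name: w + sigma * rng.standard_normal(w.shape)
        for name, w in state_dict.items()
    }

# Toy "fine-tuned model": two weight tensors.
weights = {"layer1": np.ones((2, 2)), "layer2": np.zeros(3)}
perturbed = add_random_perturbation(weights, sigma=0.1)
```

Panacea replaces this fixed random noise with an optimized, adaptive perturbation, which is what preserves downstream fine-tuning performance.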