The emerging paradigm of finetuning-as-a-service introduces a new attack surface for Large Language Models (LLMs): a small amount of harmful data uploaded by users can easily trick the finetuning process into producing an alignment-broken model. We conduct an empirical analysis and uncover a \textit{harmful embedding drift} phenomenon, which is a probable cause of the alignment-broken effect. Inspired by our findings, we propose Vaccine, a perturbation-aware alignment technique that mitigates the security risk of user finetuning. The core idea of Vaccine is to produce invariant hidden embeddings by progressively adding crafted perturbations to them during the alignment phase. This enables the embeddings to withstand harmful perturbations from un-sanitized user data in the finetuning phase. Our results on mainstream open-source LLMs (e.g., Llama2, Opt, Vicuna) demonstrate that Vaccine boosts the robustness of alignment against embedding drift induced by harmful prompts while preserving reasoning ability on benign prompts. Our code is available at \url{https://github.com/git-disl/Vaccine}.
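To make the perturb-then-optimize idea concrete, the following is a minimal PyTorch sketch of a perturbation-aware alignment step: craft a worst-case perturbation on the hidden embeddings by gradient ascent, then update the weights against those perturbed embeddings. All names here (\texttt{ToyLM}, \texttt{vaccine\_step}, the radius \texttt{rho}) are illustrative assumptions, not the authors' implementation; the actual algorithm is in the repository linked above.
\begin{verbatim}
# Illustrative sketch only; ToyLM, vaccine_step, and rho are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLM(nn.Module):
    """Tiny stand-in for an LLM: embedding -> hidden layer -> vocab logits."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.hidden = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, perturb=None):
        h = torch.tanh(self.hidden(self.embed(tokens)))  # hidden embeddings
        if perturb is not None:
            h = h + perturb  # inject crafted perturbation during alignment
        return self.head(h), h

def vaccine_step(model, tokens, labels, opt, rho=0.1):
    # Step 1: gradient ascent on the hidden embeddings to craft the
    # worst-case perturbation inside an L2 ball of radius rho.
    logits, h = model(tokens)
    h.retain_grad()  # keep the gradient of a non-leaf tensor
    F.cross_entropy(logits.transpose(1, 2), labels).backward()
    eps = rho * h.grad / (h.grad.norm() + 1e-12)
    opt.zero_grad()

    # Step 2: minimize the alignment loss under the perturbed embeddings,
    # so alignment survives embedding drift caused by later finetuning.
    logits, _ = model(tokens, perturb=eps.detach())
    loss = F.cross_entropy(logits.transpose(1, 2), labels)
    loss.backward()
    opt.step()
    opt.zero_grad()
    return loss.item()

# Usage on random stand-in "alignment" data:
model = ToyLM()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
tokens = torch.randint(0, 100, (4, 16))
labels = torch.randint(0, 100, (4, 16))
for _ in range(3):
    print(vaccine_step(model, tokens, labels, opt))
\end{verbatim}
The two-pass structure mirrors sharpness-aware minimization, but the perturbation is applied to hidden embeddings rather than to the weights, matching the abstract's description of producing embeddings that are invariant under perturbation.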