The dominant paradigm in generative modeling consists of two steps: (i) pre-training on a large-scale but unsafe dataset, and (ii) aligning the pre-trained model with human values via fine-tuning. This practice is considered safe because no current method can recover the unsafe, pre-fine-tuning model weights. In this paper, we demonstrate that this assumption is often false. Concretely, we present Spectral DeTuning, a method that can recover the weights of the pre-fine-tuning model using a few low-rank (LoRA) fine-tuned models. In contrast to previous attacks that attempt to recover pre-fine-tuning capabilities, our method aims to recover the exact pre-fine-tuning weights. We exploit this new vulnerability against large-scale models such as a personalized Stable Diffusion and an aligned Mistral.
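The abstract describes Spectral DeTuning only at a high level. As a hedged illustration of the core idea, recovering a shared base matrix W from several LoRA fine-tuned copies W_i = W + B_i A_i of the same layer, the following numpy sketch alternates two closed-form steps. The function name spectral_detuning, the mean initialization, and the fixed iteration count are assumptions made for illustration; this is not the paper's reference implementation.

```python
import numpy as np

def spectral_detuning(finetuned_weights, rank, n_iters=300):
    """Sketch: recover a shared base matrix W from LoRA fine-tuned
    copies W_i = W + B_i A_i of the same layer (same assumed rank).

    Alternating minimization of
        sum_i || W_i - W - M_i ||_F^2   s.t.  rank(M_i) <= rank.
    """
    W = np.mean(finetuned_weights, axis=0)  # illustrative initialization
    for _ in range(n_iters):
        # With W fixed, the optimal rank-r M_i is the truncated SVD of
        # the residual W_i - W (Eckart-Young theorem).
        updates = []
        for W_i in finetuned_weights:
            U, S, Vt = np.linalg.svd(W_i - W, full_matrices=False)
            updates.append((U[:, :rank] * S[:rank]) @ Vt[:rank, :])
        # With the M_i fixed, the least-squares optimal W is the mean
        # of the "de-tuned" matrices W_i - M_i.
        W = np.mean([W_i - M_i
                     for W_i, M_i in zip(finetuned_weights, updates)],
                    axis=0)
    return W

# Synthetic sanity check: five hypothetical LoRA fine-tunes of one
# 64x48 layer with rank-4 updates.
rng = np.random.default_rng(0)
W_true = rng.standard_normal((64, 48))
loras = [W_true + rng.standard_normal((64, 4)) @ rng.standard_normal((4, 48))
         for _ in range(5)]
W_rec = spectral_detuning(loras, rank=4)
print(np.linalg.norm(W_rec - W_true) / np.linalg.norm(W_true))  # should be small
```

The appeal of this formulation is that each subproblem has a closed-form solution: with W fixed, the best rank-r update per model is a truncated SVD, and with the updates fixed, the least-squares W is a simple average. In simple synthetic tests this alternating scheme can drive the residual close to zero, though convergence to the exact W is not guaranteed in general, and the paper's full algorithm may differ in details such as initialization and scheduling.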