To support various applications, business owners often seek customized models obtained by fine-tuning a pre-trained LLM through an API provided by LLM owners or cloud servers. However, this process carries a substantial risk of model misuse, potentially resulting in severe economic consequences for business owners. Safeguarding the copyright of these customized models during LLM fine-tuning has therefore become an urgent practical requirement, yet existing solutions that provide such protection are limited. To tackle this pressing issue, we propose a novel watermarking approach named "Double-I watermark". Specifically, building on the instruction-tuning data, we introduce two backdoor data paradigms, with triggers placed in the instruction and in the input, respectively. By leveraging the LLM's learning capability to absorb customized backdoor samples added to the dataset, the proposed approach effectively injects specific watermark information into the customized model during fine-tuning, making watermarks easy to both inject and verify in commercial scenarios. We evaluate the proposed "Double-I watermark" under various fine-tuning methods, demonstrating its harmlessness, robustness, uniqueness, imperceptibility, and validity through both theoretical analysis and experimental verification.