The recent explosion of high-quality language models has necessitated new methods for identifying AI-generated text. Watermarking is a leading solution and could prove to be an essential tool in the age of generative AI. Existing approaches embed watermarks at inference time and crucially rely on the large language model (LLM) specification and parameters being secret, which makes them inapplicable to the open-source setting. In this work, we introduce the first watermarking scheme for open-source LLMs. Our scheme works by modifying the parameters of the model, but the watermark can be detected from just the outputs of the model. Perhaps surprisingly, we prove that our watermarks are unremovable under certain assumptions about the adversary's knowledge. To demonstrate the behavior of our construction under concrete parameter instantiations, we present experimental results with OPT-6.7B and OPT-1.3B. We demonstrate robustness to both token substitution and perturbation of the model parameters. We find that the stronger of these attacks, the model-perturbation attack, requires degrading the quality score to 0 out of 100 in order to bring the detection rate down to 50%.
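As an illustration of the model-perturbation attack evaluated in the experiments, the sketch below adds i.i.d. Gaussian noise to the weights of OPT-1.3B and samples text from the perturbed model. The noise scale, prompt, and the `detect_watermark` placeholder are illustrative assumptions on our part; the abstract does not specify the watermark embedding or detection procedure, so this should be read as a minimal sketch of the attack setting rather than the paper's method.

```python
# Minimal sketch of a model-perturbation attack on an open-weight model:
# perturb every parameter with Gaussian noise, then generate text and
# (hypothetically) check whether the watermark is still detectable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # one of the two models used in the experiments
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

noise_scale = 1e-3  # assumed attack strength; larger values degrade quality faster
with torch.no_grad():
    for p in model.parameters():
        p.add_(noise_scale * torch.randn_like(p))  # i.i.d. Gaussian perturbation

prompt = "Open-source language models can be watermarked by"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# detected = detect_watermark(text)  # hypothetical output-only detector
print(text)
```

Sweeping `noise_scale` and recording both a text-quality score and the detection rate would reproduce the kind of trade-off curve the abstract summarizes (detection stays above 50% until quality is driven to 0 out of 100).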