Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of "green" tokens before a word is generated, and then softly promoting use of green tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.
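To make the mechanism concrete, the following is a minimal sketch in Python (NumPy only) of a soft green-list watermark of the kind described above. The vocabulary size, the green-list fraction GAMMA, the bias DELTA, and seeding the green list on just the previous token are illustrative assumptions for a toy setting, not the paper's exact configuration.

```python
import math
import numpy as np

VOCAB_SIZE = 50_000   # illustrative; real tokenizers differ
GAMMA = 0.5           # assumed fraction of the vocabulary marked "green"
DELTA = 2.0           # assumed soft bias added to green-token logits

def green_mask(prev_token: int) -> np.ndarray:
    """Pseudorandomly split the vocabulary into green/red lists, seeded by
    the previous token so a detector can recompute the split later."""
    rng = np.random.default_rng(prev_token)
    mask = np.zeros(VOCAB_SIZE, dtype=bool)
    green = rng.choice(VOCAB_SIZE, size=int(GAMMA * VOCAB_SIZE), replace=False)
    mask[green] = True
    return mask

def watermarked_sample(logits: np.ndarray, prev_token: int,
                       rng: np.random.Generator) -> int:
    """Softly promote green tokens: add DELTA to their logits, then sample
    from the resulting softmax distribution."""
    biased = logits + DELTA * green_mask(prev_token)
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(rng.choice(VOCAB_SIZE, p=probs))

def detect(tokens: list[int]) -> tuple[float, float]:
    """z-test on the green-token count: under the no-watermark null, each
    token lands in its green list with probability GAMMA, so the number of
    hits among T tokens is approximately Binomial(T, GAMMA)."""
    hits = sum(green_mask(prev)[tok] for prev, tok in zip(tokens, tokens[1:]))
    T = len(tokens) - 1
    z = (hits - GAMMA * T) / math.sqrt(T * GAMMA * (1 - GAMMA))
    p = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail p-value
    return z, p

# Toy usage: generate 200 watermarked tokens from random logits, then detect.
rng = np.random.default_rng(0)
tokens = [42]
for _ in range(200):
    logits = rng.standard_normal(VOCAB_SIZE)
    tokens.append(watermarked_sample(logits, tokens[-1], rng))
z, p = detect(tokens)  # large z and tiny p indicate watermarked text
```

Note that detect recomputes each green list from the token sequence alone, so it needs only the seeding scheme and the tokenizer, not the model weights or API, which is what makes the open-source detection property described above possible.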