Large language model (LLM) watermarking is an emerging technique that shows promise for protecting LLM copyright, monitoring AI-generated text, and preventing its misuse. A typical LLM watermarking scheme uses secret keys to partition the vocabulary into a green list and a red list, then adds a perturbation to the logits of green-list tokens to increase their sampling probability; the detector subsequently flags text as AI-generated when the proportion of green tokens exceeds a threshold. However, recent research shows that watermarking methods using many keys are susceptible to removal attacks such as token editing, synonym substitution, and paraphrasing, with robustness declining as the number of keys increases. Consequently, state-of-the-art watermarking schemes that employ a single key or few keys have been shown to be more robust against text editing and paraphrasing. In this paper, we propose a novel green-list stealing attack against state-of-the-art LLM watermarking schemes and systematically examine their vulnerability to it. We formalize the attack as a constrained mixed-integer programming problem. We evaluate the attack under a comprehensive threat model, including an extreme scenario in which the attacker has no prior knowledge, no access to the watermark detector API, and no information about the LLM's parameter settings or the watermark injection/detection scheme. Extensive experiments on LLMs such as OPT and LLaMA demonstrate that our attack successfully steals the green list and removes the watermark in all settings.
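The green/red-list scheme described above can be sketched as follows. This is a minimal single-key illustration, not the implementation attacked in the paper; the function names and the parameter values (green-list fraction `gamma`, logit bias `delta`, z-score threshold) are assumptions chosen for clarity.

```python
import math
import random

def green_list(key: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Use the secret key to select a fraction gamma of token ids as the green list."""
    rng = random.Random(key)  # key seeds the pseudorandom vocabulary partition
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def bias_logits(logits: list, green: set, delta: float = 2.0) -> list:
    """Perturb green-list token logits upward to raise their sampling probability."""
    return [l + delta if i in green else l for i, l in enumerate(logits)]

def detect(tokens: list, green: set, gamma: float = 0.5, z_threshold: float = 4.0):
    """One-proportion z-test: flag text when the green-token fraction is
    significantly above the expected fraction gamma."""
    n = len(tokens)
    g = sum(1 for t in tokens if t in green)
    z = (g - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
    return z > z_threshold, z
```

Under this sketch, the attacker's goal in the paper is to recover `green` from observed watermarked text alone, after which replacing green tokens with red ones drives the z-score below the detection threshold.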