The promise of LLM watermarking rests on the core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this assumption is dangerously flawed. We introduce the threat of watermark spoofing, an attack that allows a malicious model to generate text bearing the authentic-looking watermark of a trusted victim model. This enables the seamless misattribution of harmful content, such as disinformation, to reputable sources. The key to our attack is repurposing watermark radioactivity, the unintended inheritance of watermark patterns during fine-tuning, from a detectable trait into an attack vector. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the victim model's watermarking signal. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift toward technologies capable of distinguishing authentic watermarks from expertly imitated ones. Our code is available at https://github.com/hsannn/ditto.git.