We introduce the first watermark tailored to diffusion language models (DLMs), an emerging LLM paradigm that generates tokens in arbitrary order, in contrast to standard autoregressive language models (ARLMs), which generate tokens sequentially. While there has been much work on ARLM watermarking, a key challenge in applying these schemes directly to the DLM setting is that they rely on previously generated tokens, which are not always available during DLM generation. In this work, we address this challenge by (i) applying the watermark in expectation over the context even when some context tokens are yet to be determined, and (ii) promoting tokens that increase the watermark strength when later used as context for other tokens. Both are accomplished while keeping the watermark detector unchanged. Our experimental evaluation demonstrates that our DLM watermark achieves a >99% true positive rate with minimal quality impact and robustness comparable to existing ARLM watermarks, enabling reliable DLM watermarking for the first time.
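Step (i), applying the watermark in expectation over undetermined context, can be illustrated with a minimal sketch. It assumes a KGW-style green-list watermark, where a hash of a context token pseudo-randomly selects a "green" vocabulary subset that receives a logit bias; when the context token is not yet determined, the bias is marginalized over the model's current distribution for that position. The function names, the parameters `gamma` and `delta`, and the seeding scheme are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import numpy as np


def green_mask(context_token: int, vocab_size: int,
               gamma: float = 0.5, key: int = 42) -> np.ndarray:
    """Pseudo-randomly mark a gamma-fraction of the vocabulary as 'green',
    seeded by a context token (KGW-style). Deterministic for a fixed key."""
    seed = (key * 1_000_003 + context_token) % (2**32)
    rng = np.random.default_rng(seed)
    perm = rng.permutation(vocab_size)
    mask = np.zeros(vocab_size, dtype=bool)
    mask[perm[: int(gamma * vocab_size)]] = True
    return mask


def expected_watermark_bias(context_probs: np.ndarray, vocab_size: int,
                            delta: float = 2.0) -> np.ndarray:
    """Expected green-list logit bias, marginalizing over an undetermined
    context token. context_probs[c] is the model's current probability that
    the context position holds token c; a one-hot vector recovers the
    standard (fully determined) ARLM-style bias."""
    bias = np.zeros(vocab_size)
    for c, p in enumerate(context_probs):
        if p > 0.0:
            bias += p * delta * green_mask(c, vocab_size)
    return bias
```

In use, the expected bias would simply be added to the logits of the token being denoised before sampling; when the context later resolves to a concrete token, the expectation collapses to the usual single-context green-list bias, which is why the detector can stay unchanged.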