The capabilities of large language models have grown significantly in recent years, and so too have concerns about their misuse. It is therefore important to be able to distinguish machine-generated text from human-authored content. Prior work has proposed numerous text-watermarking schemes, which would benefit from a systematic evaluation framework. This work focuses on LLM output watermarking techniques - as opposed to image or model watermarks - and proposes Mark My Words, a comprehensive benchmark that evaluates them across different natural language tasks. We focus on three main metrics: quality, size (i.e., the number of tokens needed to detect a watermark), and tamper resistance (i.e., the ability to detect a watermark after the marked text is perturbed). Current watermarking techniques are nearly practical enough for real-world use: the scheme of Kirchenbauer et al. [33] can watermark models like Llama 2 7B-chat or Mistral-7B-Instruct with no perceptible loss of quality on natural language tasks, its watermark can be detected with fewer than 100 tokens, and it offers good tamper resistance to simple perturbations. However, current schemes struggle to efficiently watermark code generations. We publicly release our benchmark (https://github.com/wagner-group/MarkMyWords).
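To make the "size" metric concrete: green-list watermarks of the kind proposed by Kirchenbauer et al. are typically detected with a z-score test on the fraction of "green" tokens in a passage. The sketch below is a minimal illustration, not the paper's implementation; in the real scheme the green-list membership of a token depends on a keyed hash of the preceding context, which is abstracted here into an arbitrary `is_green` predicate, and the token ids and threshold are hypothetical.

```python
import math

def detection_z_score(tokens, is_green, gamma=0.25):
    """Z-score test for a green-list watermark.

    Under the null hypothesis (unwatermarked text), each token lands in
    the green list independently with probability `gamma`, so the green
    count is approximately Normal(gamma*T, gamma*(1-gamma)*T) for T
    tokens. A large positive z-score indicates a watermark.
    """
    t = len(tokens)
    green_count = sum(1 for tok in tokens if is_green(tok))
    return (green_count - gamma * t) / math.sqrt(gamma * (1 - gamma) * t)

# Toy usage with stand-in token ids and a hypothetical green predicate.
tokens = [17, 4, 99, 23, 8] * 20            # 100 "tokens"
z = detection_z_score(tokens, lambda tok: tok % 2 == 0, gamma=0.25)
watermarked = z > 4.0                        # hypothetical threshold
```

The "fewer than 100 tokens" result in the abstract corresponds to this statistic crossing a detection threshold quickly: the z-score grows on the order of the square root of the passage length when the generator biases sampling toward green tokens.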