Text watermarking has emerged as a pivotal technique for identifying machine-generated text. However, existing methods often rely on arbitrary vocabulary partitioning during decoding to embed watermarks, which compromises the availability of suitable tokens and significantly degrades response quality. This study assesses the impact of watermarking on different capabilities of large language models (LLMs) through a cognitive-science lens. Our findings highlight a significant disparity: knowledge recall and logical reasoning are more adversely affected than language generation. These results suggest a more profound effect of watermarking on LLMs than previously understood. To address these challenges, we introduce Watermarking with Mutual Exclusion (WatME), a novel approach that leverages linguistic prior knowledge of the inherent lexical redundancy in LLM vocabularies to seamlessly integrate watermarks. Specifically, WatME dynamically optimizes token usage during decoding by applying a mutually exclusive rule to the identified lexical redundancies. This strategy effectively prevents the unavailability of appropriate tokens and preserves the expressive power of LLMs. We provide both theoretical analysis and empirical evidence showing that WatME effectively preserves the diverse capabilities of LLMs while ensuring watermark detectability.
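The contrast between arbitrary vocabulary partitioning and a mutual-exclusion rule over lexical redundancies can be illustrated with a toy sketch. This is not the paper's implementation: the `keyed_bit` helper, the hand-written synonym clusters, and the split rule are all assumptions made for illustration only.

```python
import hashlib

def keyed_bit(token, key):
    """Pseudorandom bit from a secret key and a token (hypothetical helper)."""
    h = hashlib.sha256(f"{key}:{token}".encode()).digest()
    return h[0] % 2

def random_partition(vocab, key):
    """Baseline green/red split: arbitrary, so an entire synonym set
    can land in the red (penalized) list, leaving no suitable token."""
    green = {t for t in vocab if keyed_bit(t, key) == 1}
    return green, set(vocab) - green

def mutual_exclusion_partition(clusters, singletons, key):
    """Sketch of a mutual-exclusion rule: within each synonym cluster,
    place at least one member in green and the rest in red, so a usable
    variant always survives the watermark bias."""
    green, red = set(), set()
    for cluster in clusters:
        members = sorted(cluster)
        # Key-dependent choice of which member stays green keeps the split secret.
        pivot = keyed_bit("".join(members), key)
        green.add(members[pivot])
        red.update(m for m in members if m != members[pivot])
    for t in singletons:
        (green if keyed_bit(t, key) == 1 else red).add(t)
    return green, red

clusters = [{"big", "large", "huge"}, {"fast", "quick"}]
singletons = ["the", "a", "ran"]
green, red = mutual_exclusion_partition(clusters, singletons, key="secret")
# Every cluster retains at least one green member, so an appropriate
# token remains available whenever the decoder boosts green tokens.
assert all(cluster & green for cluster in clusters)
```

Under a random partition, both members of a pair like {"fast", "quick"} can fall into red, which is exactly the token-unavailability problem the abstract describes; the mutual-exclusion split rules that case out by construction.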