Watermarking for large language models (LLMs) has emerged as an effective tool for distinguishing AI-generated text from human-written content. Statistically, watermark schemes induce dependence between generated tokens and a pseudo-random sequence, reducing watermark detection to a hypothesis test of independence. We develop a unified framework for LLM watermark detection based on e-processes, providing anytime-valid guarantees for online testing. We propose several methods for constructing empirically adaptive e-processes that enhance detection power, and we establish theoretical results characterizing the power properties of the proposed procedures. Experiments demonstrate that the proposed framework achieves competitive performance compared to existing watermark detection methods.
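To illustrate the detection principle in the abstract, the following is a minimal sketch (not the paper's construction) of an e-process test: per-token e-values with expectation at most 1 under the null of independence are multiplied sequentially, and by Ville's inequality the running product crosses 1/alpha with probability at most alpha, which yields the anytime-valid guarantee for online testing. The function name and the synthetic e-values below are hypothetical placeholders.

```python
def e_process_detect(e_values, alpha=0.01):
    """Hypothetical sketch of sequential watermark detection with an e-process.

    Under the null (no watermark), each e-value has expectation <= 1, so the
    running product is a nonnegative supermartingale. Ville's inequality gives
    P(sup_t E_t >= 1/alpha) <= alpha, so stopping at the first crossing is an
    anytime-valid level-alpha test.
    """
    running = 1.0
    for t, e in enumerate(e_values, start=1):
        running *= e
        if running >= 1.0 / alpha:
            return t  # reject the null: text flagged as watermarked at token t
    return None  # threshold never crossed; do not reject


# Toy usage with synthetic e-values (purely illustrative):
# watermarked-looking evidence (e-values consistently above 1)
print(e_process_detect([2.0] * 10, alpha=0.01))   # crosses 100 at t = 7
# null-looking evidence (e-values at most 1) never triggers rejection
print(e_process_detect([0.5] * 10, alpha=0.01))   # prints None
```

The key design point this sketch reflects is that the test is valid at every stopping time, so detection can be monitored token by token as text streams in, rather than requiring a fixed sample size in advance.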