Watermarking for large language models (LLMs) has emerged as an effective tool for distinguishing AI-generated text from human-written content. Statistically, watermarking schemes induce dependence between generated tokens and a pseudo-random sequence, reducing watermark detection to a hypothesis test of independence. We develop a unified framework for LLM watermark detection based on e-processes, providing anytime-valid guarantees for online testing. We propose several constructions of empirically adaptive e-processes that enhance detection power. The proposed methods apply to any sequential testing problem in which independent pivotal statistics are available. In addition, we establish theoretical results characterizing the power properties of the proposed procedures. Experiments demonstrate that the proposed framework achieves competitive performance compared to existing watermark detection methods.
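The core detection idea can be illustrated with a minimal sketch, which is an assumption-laden simplification rather than the paper's actual construction: suppose each token yields a pivotal statistic that is Uniform(0,1) under the null of no watermark (human text) and skews toward 1 under the watermark. Multiplying factors of the form 1 + λ(s − 0.5), each with expectation 1 under the null, yields a nonnegative test martingale, and by Ville's inequality the threshold 1/α is valid at any stopping time. The fixed bet λ = 0.5 and the Beta(2,1) alternative below are illustrative choices, not the paper's adaptive constructions.

```python
import math
import random


def e_process(stats, lam=0.5):
    """Running e-process for pivotal statistics Uniform(0,1) under H0.

    Each factor 1 + lam*(s - 0.5) is nonnegative for lam in [-2, 2]
    and has expectation 1 under H0, so the running product is a
    nonnegative test martingale.  Rejecting as soon as it exceeds
    1/alpha gives an anytime-valid level-alpha test (Ville's
    inequality).
    """
    e, path = 1.0, []
    for s in stats:
        e *= 1.0 + lam * (s - 0.5)
        path.append(e)
    return path


random.seed(0)
# Null: human text -> pivotal statistics Uniform(0,1).
null_stats = [random.random() for _ in range(200)]
# Alternative: watermarked text -> statistics skew toward 1.
# Beta(2,1), drawn as sqrt(Uniform), is an illustrative stand-in.
alt_stats = [math.sqrt(random.random()) for _ in range(200)]

alpha = 0.01
null_path = e_process(null_stats)
alt_path = e_process(alt_stats)
# The watermarked sequence should cross 1/alpha; the null path
# does so with probability at most alpha at any stopping time.
print("null crossed:", max(null_path) >= 1 / alpha)
print("alt crossed:", max(alt_path) >= 1 / alpha)
```

Because the e-process is monitored token by token, detection can stop as soon as the threshold is crossed, which is what makes the guarantee "anytime-valid" rather than tied to a fixed sample size.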