Watermarking for large language models (LLMs) has emerged as an effective tool for distinguishing AI-generated text from human-written content. Statistically, watermark schemes induce dependence between generated tokens and a pseudo-random sequence, reducing watermark detection to a hypothesis testing problem on independence. We develop a unified framework for LLM watermark detection based on e-processes, providing anytime-valid guarantees for online testing. We propose several methods for constructing empirically adaptive e-processes that enhance detection power. The proposed methods apply to any sequential testing problem where independent pivotal statistics are available. In addition, we establish theoretical results characterizing the power properties of the proposed procedures. Experiments demonstrate that the proposed framework achieves competitive performance compared to existing watermark detection methods.
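To illustrate the general idea, the following is a minimal sketch (not the paper's specific construction) of e-process-based sequential detection. It assumes the pivotal statistics are Uniform(0,1) under the null (independence) and stochastically small under the watermark, and uses a simple fixed-bet e-value `e(u) = κ·u^(κ-1)` with 0 < κ < 1, which integrates to 1 over [0,1]; the e-process is the running product, and stopping when it exceeds 1/α is anytime-valid by Ville's inequality.

```python
import math
import random

def e_value(u, kappa=0.5):
    # For U ~ Uniform(0,1) under H0: E[kappa * U**(kappa-1)] = 1,
    # so this is a valid e-value. Small u (watermark evidence) yields a large e-value.
    return kappa * u ** (kappa - 1)

def detect(pivots, alpha=0.01, kappa=0.5):
    """Multiply e-values sequentially; reject H0 once the e-process reaches 1/alpha.
    Ville's inequality gives P(sup_t E_t >= 1/alpha) <= alpha under H0,
    so the test is anytime-valid: it may be monitored after every token."""
    log_e = 0.0
    for t, u in enumerate(pivots, start=1):
        log_e += math.log(e_value(u, kappa))
        if log_e >= math.log(1.0 / alpha):
            return t  # detection time (number of tokens observed)
    return None  # never crossed the threshold: no detection

random.seed(0)
# Null text: pivots are exactly Uniform(0,1).
null_pivots = [random.random() for _ in range(500)]
# Watermarked text (stylized): pivots skewed toward 0.
wm_pivots = [random.random() ** 3 for _ in range(500)]

print("null:", detect(null_pivots))
print("watermarked:", detect(wm_pivots))
```

Under the alternative the expected log-increment of the e-process is positive, so the product grows geometrically and detection typically occurs within a few tokens; under the null the expected log-increment is negative, so false alarms are rare and controlled at level α uniformly over all stopping times. The adaptive constructions in the paper replace the fixed bet κ with bets learned from past observations.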