The detection of machine-generated text, especially text from large language models (LLMs), is crucial for preventing serious social problems resulting from their misuse. Some methods train dedicated detectors on specific datasets but generalize poorly to unseen test data, while other, zero-shot methods often yield suboptimal performance. Although the recent DetectGPT has shown promising detection performance, it suffers from significant inefficiency: detecting a single candidate requires querying the source LLM with hundreds of its perturbations. This paper aims to close this efficiency gap. Concretely, we propose to incorporate a Bayesian surrogate model, which allows us to select typical samples based on Bayesian uncertainty and interpolate scores from typical samples to the remaining ones, thereby improving query efficiency. Empirical results demonstrate that our method significantly outperforms existing approaches under a low query budget. Notably, when detecting text generated by LLaMA-family models, our method with just 2 or 3 queries can outperform DetectGPT with 200 queries.
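The core idea sketched in the abstract — query the expensive scorer only at a few samples chosen by surrogate uncertainty, then interpolate scores everywhere else — can be illustrated with a minimal Gaussian-process surrogate. This is a hedged sketch, not the paper's exact algorithm: the feature matrix `X`, the toy `expensive_score` function (standing in for a real perturbation query to the source LLM), the RBF length scale, and the query budget are all illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X_train, y_train, X_all, noise=1e-4):
    # Standard GP regression: posterior mean and variance at X_all.
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_all, X_train)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    # k(x, x) = 1 for the RBF kernel, so the variance is 1 - diag(Ks Kinv Ks^T).
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mean, np.maximum(var, 0.0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))   # hypothetical features of 50 perturbed texts

def expensive_score(idx):
    # Toy stand-in for the costly per-perturbation score from the source LLM.
    x = X[idx]
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]

queried = [0]                           # start from one arbitrary sample
budget = 5                              # total "LLM queries" allowed
while len(queried) < budget:
    y = expensive_score(queried)
    _, var = gp_posterior(X[queried], y, X)
    var[queried] = -1.0                 # never re-query the same sample
    queried.append(int(np.argmax(var)))  # query where the surrogate is least certain

y = expensive_score(queried)
scores, _ = gp_posterior(X[queried], y, X)  # interpolated scores for ALL samples
```

With only 5 queries, the surrogate supplies score estimates for all 50 perturbations; the uncertainty-driven selection rule here is one simple choice of acquisition, chosen for illustration.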