Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective detection of AI-generated text to mitigate risks such as the spread of fake news and plagiarism. Existing research has been constrained by evaluating detection methods on specific domains or particular language models. In practical scenarios, however, the detector faces texts from various domains or LLMs without knowing their sources. To this end, we build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs. Empirical results show challenges in distinguishing machine-generated texts from human-authored ones across various scenarios, especially out-of-distribution. These challenges stem from the diminishing linguistic distinctions between the two sources. Despite these challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating the feasibility of real-world application. We release our resources at https://github.com/yafuly/MAGE.