The rapid advancement of large language models (LLMs) has introduced new challenges in distinguishing human-written text from AI-generated content. In this work, we explore a pipelined approach to AI-generated text detection that consists of a feature extraction step (prompt-based rewriting features inspired by RAIDAR and content-based features derived from the NELA toolkit) followed by a classification module. We conduct comprehensive experiments on the Defactify4.0 dataset, evaluating two tasks: binary classification to distinguish human-written from AI-generated text, and multi-class classification to identify which generative model produced a given text. Our findings show that NELA features significantly outperform RAIDAR features on both tasks, demonstrating their ability to capture nuanced linguistic, stylistic, and content-based differences. Combining RAIDAR and NELA features yields only marginal improvement, suggesting that the less discriminative features introduce redundancy. Among the classifiers tested, XGBoost is the most effective, leveraging the rich feature sets to achieve high accuracy and generalisation.