We consider the problem of distinguishing human-written creative fiction (excerpts from novels) from similar text generated by an LLM. Our results show that, while human observers perform poorly (near chance levels) on this binary classification task, a variety of machine-learning models achieve accuracy in the range 0.93 to 0.98 on a previously unseen test set, even using only short samples and single-token (unigram) features. We therefore employ an inherently interpretable (linear) classifier (with a test accuracy of 0.98) to elucidate the underlying reasons for this high accuracy. In our analysis, we identify specific unigram features indicative of LLM-generated text. One of the most important is that the LLM tends to use a larger variety of synonyms, skewing the word-probability distributions in a manner that is easy for a machine-learning classifier to detect, yet very difficult for a human observer to perceive. We also identified four additional explanation categories: temporal drift, Americanisms, foreign-language usage, and colloquialisms. As identification of AI-generated text depends on a constellation of such features, the classification appears robust, and is therefore not easy to circumvent by malicious actors intent on misrepresenting AI-generated text as human work.