LLM-based assistants have become widely popular since the release of ChatGPT. Concerns have been raised about their misuse in academia, given the difficulty of distinguishing human-written from generated text. To combat this, automated detection techniques have been developed and shown to be effective to some extent. However, prior work suggests that these methods often falsely flag essays by non-native speakers as generated, because such texts receive low perplexity under an LLM, and perplexity is reportedly a key feature of the detectors. We revisit these claims two years later, specifically in the Czech language setting. We show that the perplexity of texts by non-native speakers of Czech is not lower than that of native speakers. We further examine detectors from three separate families and find no systematic bias against non-native speakers. Finally, we demonstrate that contemporary detectors operate effectively without relying on perplexity.
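Perplexity, the feature the earlier detectors reportedly rely on, is the exponential of the average negative log-probability a language model assigns to the tokens of a text. The sketch below illustrates the quantity itself with made-up per-token probabilities; the probability values and the `perplexity` helper are illustrative, not taken from any detector in the study.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability per token."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical probabilities an LM might assign to tokens of a fluent
# sentence (higher probabilities) vs. a more unusual one (lower ones).
fluent = [0.5, 0.4, 0.6, 0.5]
unusual = [0.05, 0.1, 0.02, 0.08]

# Text the model finds predictable gets lower perplexity, which is why
# a perplexity-based detector could, in principle, misread predictable
# human writing as machine-generated.
print(perplexity(fluent) < perplexity(unusual))  # → True
```

In practice the probabilities come from a real LLM scoring the text token by token, but the aggregation into a single perplexity score is exactly this formula.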