Generative models, especially large language models (LLMs), have made remarkable progress in producing human-like text. However, they often exhibit patterns that make their output easier to detect than human-written text. In this paper, we investigate how explainable AI (XAI) methods can be used to reduce the detectability of AI-generated text (AIGT), and we introduce a robust ensemble-based detection approach. We begin by training an ensemble classifier to distinguish AIGT from human-written text, then apply SHAP and LIME to identify the tokens that most strongly influence its predictions. We propose four explainability-based token replacement strategies to modify these influential tokens. Our findings show that these token replacement approaches can significantly diminish a single classifier's ability to detect AIGT. However, our ensemble classifier maintains strong performance across multiple languages and domains, demonstrating that a multi-model approach can mitigate the impact of token-level manipulations. These results indicate that XAI methods can make AIGT harder to detect by targeting the most influential tokens, while also highlighting the need for robust, ensemble-based detection strategies that can adapt to evolving approaches for concealing AIGT.
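To make the attribution-then-replacement idea concrete, the following is a minimal sketch, not the paper's actual pipeline: it trains a simple linear detector, uses SHAP to rank tokens by influence on the AIGT prediction, and rewrites text by substituting the most influential tokens. The toy corpus, the choice of a TF-IDF + logistic regression detector, and the `synonyms` replacement map are all illustrative assumptions.

```python
# Illustrative sketch: SHAP-guided token attribution and replacement.
# Detector choice, data, and the replacement rule are assumptions.
import numpy as np
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: label 1 = AI-generated, 0 = human-written (placeholder data).
texts = [
    "furthermore the model delves into an intricate tapestry of ideas",
    "honestly i just wrote this on the bus sorry for typos",
    "furthermore it is crucial to note the multifaceted aspects involved",
    "we grabbed coffee and argued about the draft for an hour",
]
labels = np.array([1, 0, 1, 0])

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts).toarray()
clf = LogisticRegression().fit(X, labels)

# LinearExplainer gives exact SHAP values for linear models.
explainer = shap.LinearExplainer(clf, X)
shap_values = explainer.shap_values(X)

# Rank vocabulary tokens by mean absolute SHAP value across documents.
feature_names = vectorizer.get_feature_names_out()
influence = np.abs(shap_values).mean(axis=0)
top_tokens = [feature_names[i] for i in influence.argsort()[::-1][:5]]
print("most influential tokens:", top_tokens)

# One of many possible replacement strategies: swap each influential
# token for a less distinctive alternative (hypothetical mapping).
synonyms = {"furthermore": "also", "intricate": "complex"}

def rewrite(text: str) -> str:
    # Replace influential tokens that have a mapped alternative.
    return " ".join(synonyms.get(tok, tok) for tok in text.split())

print(rewrite(texts[0]))
```

In this sketch the same ranking could instead come from LIME's local surrogate weights, and the replacement rule could be any of the paper's four strategies; the key step is selecting targets by explanation scores rather than at random.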