Differentiating between generated and human-written content is important for navigating the modern world. Large language models (LLMs) are crucial drivers behind the increased quality of computer-generated content, and humans reportedly find it increasingly difficult to identify whether a piece of text was generated by an AI model. Our work tests how two important factors contribute to this human versus AI race: empathy and an incentive to appear human. We address both aspects in two experiments: human participants and a state-of-the-art LLM wrote relationship advice (Study 1, n = 530) or mere descriptions (Study 2, n = 610), either with or without instructions to be as human as possible. New samples of humans (n = 428 and n = 408) then judged the texts' source. Our findings show that when empathy is required, humans excel. Contrary to expectations, instructions to appear human were effective only for the LLM, so the human advantage diminished. Computational text analysis suggests that LLMs become more human-seeming because they may hold an implicit representation of what makes a text human and effortlessly apply these heuristics: the model resorts to a conversational, self-referential, informal tone with a simpler vocabulary to mimic stochastic empathy. We discuss these findings in light of recent claims about the on-par performance of LLMs.