Everyday AI detection requires differentiating between people and AI in informal, online conversations. In many cases, people will not interact directly with AI systems but will instead read conversations between AI systems and other people. We measured how well people and large language models can make this distinction using two modified versions of the Turing test: inverted and displaced. GPT-3.5, GPT-4, and displaced human adjudicators judged whether an agent was human or AI on the basis of a Turing test transcript. We found that both AI and displaced human judges were less accurate than interactive interrogators, with below-chance accuracy overall. Moreover, all three judged the best-performing GPT-4 witness to be human more often than they judged human witnesses to be human. These results suggest that both humans and current LLMs struggle to distinguish human from AI when they are not actively interrogating the witness, underscoring an urgent need for more accurate tools to detect AI in conversations.