Large Language Models (LLMs) have demonstrated an alarming ability to impersonate humans in conversation, raising concerns about their potential misuse in scams and deception. Humans have a right to know whether they are conversing with an LLM. We evaluate text-based prompts designed as challenges to expose LLM impostors in real time. To this end, we compile and release an open-source benchmark dataset that includes 'implicit challenges', which exploit an LLM's instruction-following mechanism to cause role deviation, and 'explicit challenges', which test an LLM's ability to perform simple tasks that are typically easy for humans but difficult for LLMs. Our evaluation of 9 leading models from the LMSYS leaderboard revealed that explicit challenges successfully detected LLMs in 78.4% of cases, while implicit challenges were effective in 22.9% of instances. User studies validate the real-world applicability of our methods, with humans outperforming LLMs on explicit challenges (78% vs. 22% success rate). Our framework unexpectedly revealed that many study participants were using LLMs to complete the tasks, demonstrating its effectiveness in detecting both AI impostors and human misuse of AI tools. This work addresses the critical need for reliable, real-time LLM detection methods in high-stakes conversations.