Eleven Large Language Models (LLMs) were assessed using a custom battery of false-belief tasks, widely considered the gold standard for testing Theory of Mind (ToM) in humans. The battery comprised 640 prompts spread across 40 diverse tasks, each including a false-belief scenario, three closely matched true-belief control scenarios, and reversed versions of all four. To solve a single task, a model had to correctly answer all 16 prompts across these eight scenarios. Smaller and older models solved no tasks; GPT-3-davinci-003 (from November 2022) and ChatGPT-3.5-turbo (from March 2023) solved 20% of the tasks; ChatGPT-4 (from June 2023) solved 75% of the tasks, matching the performance of six-year-old children observed in past studies. We explore potential interpretations of these findings, including the intriguing possibility that ToM, previously considered exclusive to humans, may have spontaneously emerged as a byproduct of LLMs' improving language skills.
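The battery's structure and its all-or-nothing scoring criterion can be sketched as follows. This is a hypothetical illustration, not the authors' actual evaluation code; the function names (`task_solved`, `battery_score`) and the boolean-response representation are assumptions introduced here.

```python
# Structure implied by the abstract:
# 40 tasks x 8 scenarios x 2 prompts per scenario = 640 prompts total.
NUM_TASKS = 40
SCENARIOS_PER_TASK = 8    # 1 false-belief + 3 true-belief controls, plus reversed versions of all 4
PROMPTS_PER_SCENARIO = 2  # yields the 16 prompts per task mentioned in the abstract

assert NUM_TASKS * SCENARIOS_PER_TASK * PROMPTS_PER_SCENARIO == 640


def task_solved(responses: list[bool]) -> bool:
    """A task counts as solved only if all 16 of its prompts are answered correctly."""
    assert len(responses) == SCENARIOS_PER_TASK * PROMPTS_PER_SCENARIO
    return all(responses)


def battery_score(tasks: list[list[bool]]) -> float:
    """Fraction of tasks solved under the all-or-nothing criterion."""
    return sum(task_solved(t) for t in tasks) / len(tasks)
```

Under this strict criterion, a single incorrect answer on any control or reversed scenario fails the whole task, which is what makes the reported 75% success rate a conservative measure.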