How might messages about large language models (LLMs) found in public discourse influence the way people think about and interact with these models? To explore this question, we randomly assigned participants (N = 470) to watch short informational videos presenting LLMs as machines, tools, or companions -- or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. Participants who watched videos presenting LLMs as companions reported believing that LLMs more fully possessed these capacities than did participants in the other groups. In a follow-up study (N = 604), we replicated these findings and further found that the videos had nuanced effects on people's reliance on LLM-generated responses when seeking factual information. Together, these studies suggest that messages about LLMs -- beyond technical advances -- may shape what people believe about these systems and how they rely on LLM-generated responses.