How might messages about large language models (LLMs) found in public discourse influence the way people think about and interact with these models? To explore this question, we randomly assigned participants (N = 470) to watch short informational videos presenting LLMs as machines, tools, or companions -- or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or to remember things. We found that participants who watched videos presenting LLMs as companions believed that LLMs possessed these capacities more fully than did participants in the other groups. In a follow-up study (N = 604), we replicated these findings and found nuanced effects of these videos on people's reliance on LLM-generated responses when seeking factual information. Together, these studies suggest that messages about LLMs -- beyond technical advances themselves -- may shape what people believe about these systems and how they rely on LLM-generated responses.