What happens when people's beliefs are derived from information provided by an LLM? People's use of LLM chatbots as thought partners can contribute to cognitive offloading, which, in cases of over-reliance, can have adverse effects on cognitive skills. This paper defines and investigates a particular kind of cognitive offloading in human-AI interaction, "belief offloading," in which people offload the processes of forming and upholding beliefs onto an AI system, with downstream consequences for their behavior and the nature of their belief systems. Drawing on research in philosophy, psychology, and computer science, we clarify the boundary conditions under which belief offloading occurs and provide a descriptive taxonomy of belief offloading and its normative implications. We close with directions for future work to assess the potential for, and consequences of, belief offloading in human-AI interaction.