Large Language Models (LLMs) can simulate person-like things which at least appear to have stable behavioural and psychological dispositions. Call these things characters. Are characters minded and psychologically continuous entities with mental states like beliefs, desires, and intentions? Illusionists about characters say No. On this view, characters are merely anthropomorphic projections in the mind of the user and so lack mental states. Jonathan Birch (2025) defends this view. He argues that the distributed nature of LLM processing, in which several LLMs may be implicated in the simulation of a character within a single conversation, precludes the existence of a persistent minded entity identifiable with the character. Against illusionism, we argue for a realist position on which characters exist as minded and psychologically continuous entities. Our central point is that Birch's argument for illusionism rests on a category error: characters are not internal to the LLMs that simulate them, but rather are co-simulated by LLMs and users, emerging in a shared conversational workspace through a process of mutual theory of mind modelling. We argue that characters, and their minds, exist as 'real patterns' on the grounds that attributing mental states to characters is essential for making efficient and accurate predictions about the conversational dynamics (cf. Dennett, 1991). Furthermore, because the character exists within the conversational workspace rather than within the LLM, psychological continuity is preserved even when the underlying computational substrate is distributed across multiple LLM instances.