Understanding and attributing mental states, known as Theory of Mind (ToM), is a fundamental capability for human social reasoning. While Large Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms underlying these capabilities remain elusive. In this study, we find that the belief states of different agents can be linearly decoded from the neural activations of language models, indicating that these models internally represent both their own and others' beliefs. By manipulating these representations, we observe dramatic changes in the models' ToM performance, underscoring their pivotal role in social reasoning. Moreover, our findings extend to diverse social reasoning tasks involving different causal inference patterns, suggesting that these representations may generalize beyond belief attribution.
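To make the notion of "linearly decoding belief status from neural activations" concrete, the sketch below illustrates one plausible setup: a linear probe trained on hidden activations to predict an agent's belief state. All names, shapes, and the placeholder data are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (hypothetical setup): a linear probe that decodes an agent's
# belief state from a language model's hidden activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# hidden_states: (n_examples, hidden_dim) activations extracted at a chosen
# layer/token position for each ToM scenario; belief_labels: binary belief
# status (e.g., true vs. false belief) from the perspective of a given agent.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 4096))   # placeholder activations
belief_labels = rng.integers(0, 2, size=1000)   # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, belief_labels, test_size=0.2, random_state=0
)

# A linear classifier; above-chance test accuracy would indicate that belief
# status is linearly decodable from the activations.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

In practice, separate probes would be trained per agent perspective (and typically per layer), with real activations and belief annotations in place of the random placeholders above.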