While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent the beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark spanning various LM types with different model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on Theory of Mind tasks, and we demonstrate that models' representations are sensitive to prompt variations, even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
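For readers unfamiliar with activation steering, the sketch below illustrates the general idea under assumptions that are not taken from the paper: a fixed direction vector is added to a layer's hidden activations at inference time via a PyTorch forward hook. The toy linear layer standing in for a transformer block, the `steering_vector`, and the strength `alpha` are all hypothetical placeholders; with a real LM the hook would be registered on an actual layer such as a residual-stream block.

```python
# Minimal sketch of activation steering (hypothetical setup, not the paper's
# exact procedure): shift a layer's output along a fixed direction at
# inference time using a PyTorch forward hook.
import torch

torch.manual_seed(0)

hidden_dim = 16
layer = torch.nn.Linear(hidden_dim, hidden_dim)  # stand-in for a transformer block

# Direction to steer along, e.g. a difference of mean activations between two
# prompt conditions (assumed precomputed; random here for illustration).
steering_vector = torch.randn(hidden_dim)
alpha = 4.0  # steering strength (hypothetical value)

def steer_hook(module, inputs, output):
    # Returning a value from a forward hook replaces the module's output.
    return output + alpha * steering_vector

handle = layer.register_forward_hook(steer_hook)

x = torch.randn(1, hidden_dim)
steered = layer(x)      # output with the steering shift applied

handle.remove()
unsteered = layer(x)    # same input, original behaviour restored

print("shift norm:", (steered - unsteered).norm().item())
```

Note that nothing in this sketch requires a trained probe: the steering vector can be derived directly from activation statistics, which is the property the abstract's final claim relies on.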