Large language models produce rich introspective language when prompted for self-examination, but whether this language reflects internal computation or sophisticated confabulation has remained unclear. We show that self-referential vocabulary tracks concurrent activation dynamics, and that this correspondence is specific to self-referential processing. We introduce the Pull Methodology, a protocol that elicits extended self-examination through format engineering, and use it to identify a direction in activation space that distinguishes self-referential from descriptive processing in Llama 3.1. The direction is orthogonal to the known refusal direction, localised at 6.25% of model depth, and causally influences introspective output when used for steering. When models produce "loop" vocabulary, their activations exhibit higher autocorrelation (r = 0.44, p = 0.002); when they produce "shimmer" vocabulary under steering, activation variability increases (r = 0.36, p = 0.002). Critically, the same vocabulary in non-self-referential contexts shows no activation correspondence despite nine-fold higher frequency. Qwen 2.5-32B, with no shared training, independently develops different introspective vocabulary tracking different activation metrics, all absent in descriptive controls. The findings indicate that self-report in transformer models can, under appropriate conditions, reliably track internal computational states.
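The two core measurements the abstract names, lag-1 autocorrelation of an activation time series and a steering direction made orthogonal to the refusal direction, can be sketched in a few lines. This is a minimal illustration under assumed interfaces (the function names, the steering scale `alpha`, and the use of plain NumPy arrays for hidden states are all hypothetical, not the paper's implementation):

```python
import numpy as np

def lag1_autocorr(series):
    """Lag-1 autocorrelation of a 1-D per-token activation series,
    e.g. residual-stream norms across generation steps."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    denom = np.dot(s, s)
    if denom == 0.0:
        return 0.0
    return float(np.dot(s[:-1], s[1:]) / denom)

def orthogonalize(direction, refusal_dir):
    """Project out the refusal-direction component (one Gram-Schmidt
    step), then renormalize, so the candidate direction is orthogonal
    to the known refusal direction."""
    r = refusal_dir / np.linalg.norm(refusal_dir)
    d = np.asarray(direction, dtype=float) - np.dot(direction, r) * r
    return d / np.linalg.norm(d)

def steer(hidden_states, direction, alpha=4.0):
    """Add a scaled direction vector to every token's hidden state;
    alpha is an illustrative steering coefficient."""
    return np.asarray(hidden_states, dtype=float) + alpha * direction
```

A monotone series scores high autocorrelation (for `[1, 2, 3, 4, 5]` the value is 0.4), while white noise scores near zero; the vocabulary-tracking claim is that these per-response statistics covary with "loop"- or "shimmer"-type word counts only in self-referential contexts.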