Recent studies in interpretability have explored the inner workings of transformer models trained on tasks across various domains, often discovering that these networks naturally develop surprisingly structured representations. When such representations comprehensively reflect the structure of the task domain, they are commonly referred to as ``World Models'' (WMs). In this work, we discover such WMs in transformers trained on maze tasks. In particular, by employing Sparse Autoencoders (SAEs) and analyzing attention patterns, we examine the construction of WMs and demonstrate consistency between the circuit analysis and the SAE feature-based analysis. We intervene on the isolated features to confirm their causal role and, in doing so, find asymmetries between certain types of interventions. Surprisingly, we find that models can reason with respect to a greater number of active features than they encounter during training, even though attempting to specify these in the input token sequence would cause the model to fail. Furthermore, we observe that varying positional encodings can alter how WMs are encoded in a model's residual stream. By analyzing the causal role of these WMs in a toy domain, we hope to make progress toward an understanding of emergent structure in the representations acquired by Transformers, leading to the development of more interpretable and controllable AI systems.