Linear probes and sparse autoencoders consistently recover meaningful structure from transformer representations -- yet why should such simple methods succeed in deep, nonlinear systems? We show that this is not merely an empirical regularity but a consequence of architectural necessity: transformers communicate information through linear interfaces (attention OV circuits, unembedding matrices), and any semantic feature decoded through such an interface must occupy a context-invariant linear subspace. We formalize this as the \emph{Invariant Subspace Necessity} theorem and derive the \emph{Self-Reference Property}: tokens directly provide the geometric direction for their associated features, enabling zero-shot identification of semantic structure without labeled data or learned probes. Empirical validation across eight classification tasks and four model families confirms the geometric alignment between class tokens and their semantically related instances. Our framework provides \textbf{a principled architectural explanation} for why linear interpretability methods work, unifying linear probes and sparse autoencoders.
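As a schematic illustration of the subspace claim (the notation $h(x)$, $w_f$, $V_f$ is ours and need not match the theorem's exact statement): if a feature $f$ is read out through a linear interface, its score is an inner product with a fixed row $w_f$ of a readout matrix such as $W_{OV}$ or $W_U$,
\[
\mathrm{score}_f(x) \;=\; \langle w_f,\, h(x) \rangle \;=\; \langle w_f,\, P_{V_f}\, h(x) \rangle,
\qquad V_f := \mathrm{span}\{w_f\},
\]
where $P_{V_f}$ is the orthogonal projection onto $V_f$. Only the component of the context-dependent state $h(x)$ lying in the fixed subspace $V_f$ can affect how $f$ is decoded, which is the sense in which linearly decoded features must occupy a context-invariant linear subspace.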
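The following is a minimal zero-shot sketch of the \emph{Self-Reference Property}, assuming a standard HuggingFace decoder-only model; the model name, layer, mean pooling, and class words are illustrative assumptions, not the paper's experimental setup. The embedding direction of a class token serves directly as the probe direction, with no labeled data or learned probe.
\begin{verbatim}
# Sketch: zero-shot semantic identification via class-token directions.
# Assumptions (not from the paper): model "gpt2", last-layer mean pooling,
# class words "positive"/"negative", input-embedding rows as directions
# (for GPT-2 these are tied to the unembedding matrix).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Mean-pooled hidden state of `text` at the chosen layer."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

def class_direction(class_word: str) -> torch.Tensor:
    """Direction supplied by the class token itself (Self-Reference)."""
    ids = tok(" " + class_word, add_special_tokens=False)["input_ids"]
    emb = model.get_input_embeddings().weight[ids].mean(dim=0)
    return emb / emb.norm()

classes = ["positive", "negative"]            # hypothetical class tokens
dirs = torch.stack([class_direction(c) for c in classes])

for sentence in ["The movie was wonderful.", "The movie was terrible."]:
    h = hidden_state(sentence)
    scores = dirs @ (h / h.norm())            # cosine similarity to each class direction
    print(sentence, "->", classes[scores.argmax().item()])
\end{verbatim}
The design choice here is that no parameters are fit: the class token's own embedding row plays the role of the probe weight vector, which is the zero-shot reading of the Self-Reference Property described above.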