Understanding the internal representations of neural models is a core interest of mechanistic interpretability. Due to its high dimensionality, the representation space can encode many different aspects of the input. To what extent are these aspects organized and encoded in separate subspaces? Is it possible to find such ``natural'' subspaces in a purely unsupervised way? Somewhat surprisingly, we can indeed achieve this and find interpretable subspaces with a seemingly unrelated training objective. Our method, neighbor distance minimization (NDM), learns non-basis-aligned subspaces in an unsupervised manner. Qualitative analysis shows that the learned subspaces are interpretable in many cases, and that the information they encode tends to correspond to the same abstract concept across different inputs, making such subspaces analogous to ``variables'' used by the model. We also conduct quantitative experiments using known circuits in GPT-2; the results show a strong connection between subspaces and circuit variables. We further provide evidence of scalability to 2B-parameter models by finding separate subspaces that mediate the routing between contextual and parametric knowledge. Viewed more broadly, our findings offer a new perspective on understanding model internals and building circuits.
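The abstract only names the training objective, so the following is an illustrative sketch rather than the paper's actual implementation: it shows one plausible way a neighbor-distance-minimization objective over a learned, non-basis-aligned subspace could be set up in PyTorch. All names (\texttt{ndm\_loss}, \texttt{basis}, \texttt{num\_neighbors}) and the exact loss form are assumptions for illustration only.

\begin{verbatim}
import torch

# Hypothetical sketch of a neighbor-distance-minimization objective.
# Idea: learn an orthonormal basis for a k-dimensional subspace of the
# activation space such that each activation's subspace coordinates stay
# close to those of its nearest neighbors (neighbors found in full space).
# The real NDM objective in the paper may differ in important details.

def ndm_loss(acts: torch.Tensor, basis: torch.Tensor,
             num_neighbors: int = 8) -> torch.Tensor:
    """acts: (n, d) activations; basis: (d, k) learnable subspace basis."""
    q, _ = torch.linalg.qr(basis)        # orthonormalize: (d, k)
    proj = acts @ q                      # (n, k) subspace coordinates

    # Nearest neighbors computed in the original activation space.
    dists = torch.cdist(acts, acts)      # (n, n)
    dists.fill_diagonal_(float("inf"))
    nbr_idx = dists.topk(num_neighbors, largest=False).indices  # (n, m)

    # Penalize distances between each point and its neighbors *within*
    # the learned subspace.
    nbr_proj = proj[nbr_idx]             # (n, m, k)
    return (proj.unsqueeze(1) - nbr_proj).pow(2).sum(-1).mean()

# Usage sketch: optimize the basis on activations from one model layer.
d, k, n = 768, 16, 2048
acts = torch.randn(n, d)                 # stand-in for real activations
basis = torch.nn.Parameter(torch.randn(d, k))
opt = torch.optim.Adam([basis], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = ndm_loss(acts, basis)
    loss.backward()
    opt.step()
\end{verbatim}

Note that a naive objective of this form can admit degenerate solutions (e.g., a subspace along which all activations vary little), so the actual method presumably adds constraints or regularization not shown here.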