While the activations of neurons in deep neural networks usually do not have a simple human-understandable interpretation, sparse autoencoders (SAEs) can be used to transform these activations into a higher-dimensional latent space which may be more easily interpretable. However, these SAEs can have millions of distinct latent features, making it infeasible for humans to manually interpret each one. In this work, we build an open-source automated pipeline to generate and evaluate natural language explanations for SAE features using LLMs. We test our framework on SAEs of varying sizes, activation functions, and losses, trained on two different open-weight LLMs. We introduce five new techniques to score the quality of explanations that are cheaper to run than the previous state of the art. One of these techniques, intervention scoring, evaluates the interpretability of the effects of intervening on a feature, which we find explains features that are not recalled by existing methods. We propose guidelines for generating better explanations that remain valid for a broader set of activating contexts, and discuss pitfalls with existing scoring techniques. We use our explanations to measure the semantic similarity of independently trained SAEs, and find that SAEs trained on nearby layers of the residual stream are highly similar. Our large-scale analysis confirms that SAE latents are indeed much more interpretable than neurons, even when neurons are sparsified using top-$k$ postprocessing. Our code is available at https://github.com/EleutherAI/sae-auto-interp, and our explanations are available at https://huggingface.co/datasets/EleutherAI/auto_interp_explanations.