A recent line of work has shown promise in using sparse autoencoders (SAEs) to uncover interpretable features in neural network representations. However, the simple linear-nonlinear encoding mechanism in SAEs limits their ability to perform accurate sparse inference. In this paper, we investigate sparse inference and learning in SAEs through the lens of sparse coding. Specifically, we show that SAEs perform amortised sparse inference with a computationally restricted encoder and, using compressed sensing theory, we prove that this mapping is inherently insufficient for accurate sparse inference, even in solvable cases. Building on this theory, we empirically explore conditions under which more sophisticated sparse inference methods outperform traditional SAE encoders. Our key contribution is the decoupling of the encoding and decoding processes, which allows for a comparison of various sparse encoding strategies. We evaluate these strategies along two dimensions: alignment with true underlying sparse features and correct inference of sparse codes, while also accounting for computational costs during training and inference. Our results reveal that substantial performance gains can be achieved with minimal increases in compute cost. We demonstrate that these findings generalise to SAEs applied to large language models (LLMs), where advanced encoders achieve interpretability comparable to standard SAE encoders. This work opens new avenues for understanding neural network representations and offers important implications for improving the tools we use to analyse the activations of large language models.
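The central contrast in the abstract, a one-step amortised SAE encoder versus a more expensive iterative sparse inference procedure sharing the same decoder, can be illustrated with a minimal sketch. This is not the paper's implementation; the problem sizes, the tied encoder `W = D.T`, and the ISTA hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse coding setup (sizes are illustrative assumptions):
# a dictionary D maps k-sparse codes z in R^m to observations x = D z in R^n.
n, m, k = 64, 256, 4
D = rng.normal(size=(n, m)) / np.sqrt(n)  # decoder / dictionary

z_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
z_true[support] = rng.uniform(1.0, 2.0, size=k)
x = D @ z_true

def sae_encode(x, W, b):
    """One-step amortised encoder: linear map + ReLU (the standard SAE form)."""
    return np.maximum(W @ x + b, 0.0)

def ista_encode(x, D, lam=0.1, n_steps=200):
    """Iterative sparse inference (ISTA) against the same decoder D.

    Approximately solves min_z 0.5*||x - D z||^2 + lam*||z||_1 via proximal
    gradient steps; a costlier but more accurate encoder than the one-step map.
    """
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.T @ (D @ z - x)
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

# A tied, untrained encoder W = D^T stands in for the amortised SAE map here.
z_sae = sae_encode(x, D.T, np.zeros(m))
z_ista = ista_encode(x, D)

print("SAE  reconstruction error:", np.linalg.norm(x - D @ z_sae))
print("ISTA reconstruction error:", np.linalg.norm(x - D @ z_ista))
```

Decoupling the encoder from the decoder, as the paper proposes, is what makes this comparison possible: both encoders produce codes for the same dictionary `D`, so reconstruction error and code sparsity can be compared directly at different compute budgets.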