Recent work by Anthropic on mechanistic interpretability claims to understand and control large language models by extracting human-interpretable features from their neural activation patterns using sparse autoencoders (SAEs). If successful, this approach would offer one of the most promising routes to human oversight in AI safety. We conduct an initial stress test of these claims by replicating their main results with open-source SAEs for Llama 3.1. While we successfully reproduce basic feature extraction and steering capabilities, our investigation suggests that major caution is warranted regarding the generalizability of these claims. We find that feature steering exhibits substantial fragility, with sensitivity to layer selection, steering magnitude, and context. We observe non-standard activation behavior and demonstrate the difficulty of distinguishing thematically similar features from one another. While SAE-based interpretability produces compelling demonstrations in selected cases, current methods often fall short of the systematic reliability required for safety-critical applications. This suggests a necessary shift in focus from prioritizing interpretability of internal representations toward reliable prediction and control of model outputs. Our work contributes to a more nuanced understanding of what mechanistic interpretability has achieved and highlights fundamental challenges for AI safety that remain unresolved.
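The following is a minimal sketch of the kind of SAE feature steering the abstract refers to: a single SAE decoder direction is scaled by a steering coefficient and added to the residual stream at one layer of a Llama 3.1 model during generation. The layer index, feature index, coefficient, and the placeholder decoder matrix are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of SAE-based feature steering on Llama 3.1.
# Layer index, feature index, coefficient, and SAE decoder are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # assumption: 8B base model
LAYER_IDX = 12                          # hypothetical layer to steer
STEERING_COEF = 8.0                     # hypothetical steering magnitude

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# A real setup would load a pretrained SAE decoder of shape
# (num_features, hidden_size); here a random placeholder stands in.
sae_decoder = torch.randn(65536, model.config.hidden_size, dtype=torch.bfloat16)
feature_idx = 1234                      # hypothetical feature to amplify
steering_vector = sae_decoder[feature_idx]

def steering_hook(module, inputs, output):
    # Llama decoder layers return a tuple; hidden states are the first element.
    hidden = output[0] + STEERING_COEF * steering_vector.to(output[0].device)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER_IDX].register_forward_hook(steering_hook)
inputs = tokenizer("The Golden Gate Bridge is", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
handle.remove()
```

The fragility noted in the abstract corresponds to the free parameters in this sketch: results can change markedly with the choice of LAYER_IDX, the value of STEERING_COEF, and the prompt context in which the steered generation is run.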