Neural networks are becoming increasingly capable, yet we still do not understand the mechanisms that underlie their behaviour. Understanding these internal decision-making processes, the goal of mechanistic interpretability, enables (1) accountability and control in high-stakes domains, (2) the study of digital brains and the emergence of cognition, and (3) discovery of new knowledge when AI systems outperform humans. This paper traces how attention head intervention emerged as a key method for causal interpretability of transformers. The evolution from visualization to intervention marks a paradigm shift: instead of observing correlations, researchers causally validate mechanistic hypotheses by directly perturbing model components. Head intervention studies have revealed robust empirical findings while also highlighting limitations that complicate interpretation. Recent work demonstrates that this mechanistic understanding enables targeted control of model behaviour: selective attention head interventions can suppress toxic outputs and steer semantic content, validating the practical utility of interpretability research for AI safety.
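To make the core idea concrete, the sketch below shows a minimal zero-ablation intervention on a single attention head, assuming the HuggingFace `transformers` library and GPT-2; the layer/head indices are illustrative placeholders, not heads identified in this paper. Comparing next-token predictions with and without the ablation gives a causal, rather than correlational, readout of what that head contributes.

```python
# Minimal sketch: zero-ablating one attention head in GPT-2 via head_mask.
# Assumptions: HuggingFace transformers, PyTorch; layer 9 / head 6 is a
# hypothetical target head chosen only for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The nurse said that", return_tensors="pt")

# head_mask has shape (num_layers, num_heads); 1.0 keeps a head, 0.0 ablates it.
head_mask = torch.ones(model.config.n_layer, model.config.n_head)
head_mask[9, 6] = 0.0  # hypothetical target head

with torch.no_grad():
    baseline = model(**inputs).logits[0, -1]
    ablated = model(**inputs, head_mask=head_mask).logits[0, -1]

# Contrast the top next-token predictions before and after the intervention.
for name, logits in [("baseline", baseline), ("ablated", ablated)]:
    top = logits.topk(5).indices.tolist()
    print(name, tokenizer.decode(top))
```

In practice, intervention studies replace or patch head outputs (e.g. with activations from a counterfactual prompt) rather than only zeroing them, but the causal logic is the same: perturb one component and measure the change in model behaviour.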