Neural networks are growing more capable, but we do not understand the internal mechanisms that produce their behavior. The study of these mechanisms and their decision-making processes, known as mechanistic interpretability, enables (1) accountability and control in high-stakes domains, (2) the study of digital brains and the emergence of cognition, and (3) the discovery of new knowledge when AI systems outperform humans. This paper traces how attention head intervention emerged as a key method for the causal interpretability of transformers. The evolution from visualization to intervention represents a paradigm shift: from observing correlations to causally validating mechanistic hypotheses through direct intervention. Head intervention studies have revealed robust empirical findings while also exposing limitations that complicate interpretation.
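To make the intervention idea concrete, below is a minimal sketch of one common head intervention, zero-ablation: a toy NumPy multi-head attention layer is run twice, once normally and once with one head's output zeroed, and the difference in outputs measures that head's causal contribution. All dimensions, weight matrices, and the `attention` helper are hypothetical illustrations, not taken from any specific model in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 2 heads, 4 tokens, model dimension 8.
n_heads, seq, d_model = 2, 4, 8
d_head = d_model // n_heads

x = rng.standard_normal((seq, d_model))          # token activations
Wq = rng.standard_normal((n_heads, d_model, d_head))
Wk = rng.standard_normal((n_heads, d_model, d_head))
Wv = rng.standard_normal((n_heads, d_model, d_head))
Wo = rng.standard_normal((n_heads * d_head, d_model))

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, ablate_head=None):
    """Multi-head attention; optionally zero-ablate one head's output."""
    outs = []
    for h in range(n_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        scores = softmax(q @ k.T / np.sqrt(d_head))
        out = scores @ v
        if h == ablate_head:
            out = np.zeros_like(out)  # the intervention: remove this head's contribution
        outs.append(out)
    return np.concatenate(outs, axis=-1) @ Wo

baseline = attention(x)
ablated = attention(x, ablate_head=0)
effect = np.abs(baseline - ablated).mean()
print(f"mean causal effect of ablating head 0: {effect:.3f}")
```

In practice this comparison is run on behaviorally meaningful metrics (e.g. the logit of a correct next token) rather than raw activations, and ablation is only one member of the intervention family; patching a head's output from a different input follows the same run-twice-and-compare pattern.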