Transformer-based language models are treated as black boxes due to their large number of parameters and the complexity of their internal interactions, which raises serious safety concerns. Mechanistic Interpretability (MI) aims to reverse-engineer neural network behavior into human-understandable components. In this work, we focus on understanding how GPT-2 Small performs the task of predicting three-letter acronyms. Previous work in the MI field has so far focused on tasks that predict a single token. To the best of our knowledge, this is the first work that attempts to mechanistically understand a behavior involving the prediction of multiple consecutive tokens. We find that the prediction is carried out by a circuit composed of 8 attention heads (~5% of the total number of heads), which we classify into three groups according to their role. We also demonstrate that these heads concentrate the acronym-prediction functionality. In addition, we mechanistically interpret the most relevant heads of the circuit and find that they rely on positional information, which is propagated via the causal mask mechanism. We expect this work to lay the foundation for understanding more complex behaviors involving multi-token predictions.