While transformer models exhibit strong capabilities on linguistic tasks, their complex architectures make them difficult to interpret. Recent work has aimed to reverse engineer transformer models into human-readable representations called circuits that implement algorithmic functions. We extend this research by analyzing and comparing circuits for similar sequence continuation tasks, which include increasing sequences of Arabic numerals, number words, and months. By applying circuit interpretability analysis, we identify a key sub-circuit in both GPT-2 Small and Llama-2-7B that is responsible for detecting sequence members and predicting the next member in a sequence. Our analysis reveals that semantically related sequences rely on shared circuit subgraphs with analogous roles. Additionally, we show that this sub-circuit affects various math-related prompts, such as intervaled sequences, Spanish number word and month continuation, and natural language word problems. Overall, documenting shared computational structures enables better predictions of model behavior, identification of errors, and safer editing procedures. This mechanistic understanding of transformers is a critical step towards building more robust, aligned, and interpretable language models.