While recent work has shown that transformers can learn addition, previously studied models exhibit poor prediction accuracy and are limited to small numbers. Furthermore, the relationship between single-task and multitask arithmetic capabilities remains unexplored. In this work, we analyze 44 autoregressive transformer models trained on addition, subtraction, or both. These include 16 addition-only models, 2 subtraction-only models, 8 "mixed" models trained to perform both operations, and 14 mixed models initialized with parameters from an addition-only model. The models span 5- to 15-digit questions, 2 to 4 attention heads, and 2 to 3 layers. We show that the addition models converge on a common logical algorithm, with most models achieving >99.999% prediction accuracy, and we provide a detailed mechanistic explanation of how this algorithm is implemented within the network architecture. Subtraction-only models, by contrast, achieve lower accuracy. Through parameter-transfer experiments on the initialized mixed models, we explore how multitask learning dynamics evolve, revealing that some features originally specialized for addition become polysemantic, serving both operations and boosting subtraction accuracy. We explain this mixed algorithm mechanistically as well. Finally, we introduce a reusable library of mechanistic interpretability tools to define, locate, and visualize these algorithmic circuits across multiple models.
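To make the parameter-transfer setup concrete, the following is a minimal PyTorch sketch: train an addition-only model, then initialize a mixed model from its weights before fine-tuning on both operations. The architecture, token vocabulary, and names below are illustrative assumptions (chosen within the layer/head ranges stated above), not the paper's actual code.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary: digits 0-9 plus '+', '-', '=' (13 tokens).
VOCAB = 13

class TinyAddSubTransformer(nn.Module):
    """Small autoregressive transformer for "a+b=" / "a-b=" questions.
    Sizes are illustrative, within the paper's stated ranges."""

    def __init__(self, n_layers=2, n_heads=3, d_model=96, n_ctx=48):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(n_ctx, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.unembed = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):  # tokens: (batch, seq) integer ids
        seq = tokens.shape[1]
        x = self.embed(tokens) + self.pos(torch.arange(seq, device=tokens.device))
        # Causal mask keeps the model autoregressive.
        mask = nn.Transformer.generate_square_subsequent_mask(seq).to(tokens.device)
        return self.unembed(self.blocks(x, mask=mask))

# Train an addition-only model first (training loop omitted), then copy
# its parameters into a mixed model before fine-tuning on a mixture of
# addition and subtraction questions.
add_only = TinyAddSubTransformer()
# ... train add_only on addition-only questions ...
mixed = TinyAddSubTransformer()
mixed.load_state_dict(add_only.state_dict())  # parameter transfer
```

Under this setup, fine-tuning `mixed` on both operations is where features learned for addition can be reused, and, per the results above, may become polysemantic.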