Recent work has shown that the computations of Transformers can be simulated in the RASP family of programming languages. These findings have improved our understanding of the expressive capacity and generalization abilities of Transformers. In particular, Transformers have been conjectured to length-generalize exactly on those problems that admit simple RASP programs. However, it remains open whether trained models actually implement simple, interpretable programs. In this paper, we present a general method for extracting such programs from trained Transformers. The idea is to faithfully re-parameterize a Transformer as a RASP program and then apply causal interventions to discover a small sufficient sub-program. In experiments on small Transformers trained on algorithmic and formal-language tasks, we show that our method often recovers simple, interpretable RASP programs from length-generalizing Transformers. Our results provide the most direct evidence to date that Transformers internally implement simple RASP programs.
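Since the abstract hinges on the notion of a "simple RASP program", the following minimal Python sketch illustrates RASP's two core primitives, `select` (building a boolean attention pattern from a key/query predicate) and `aggregate` (averaging values over the selected positions). The sequence-reversal example and all helper names are our own illustration of the language's style, not the paper's extraction method.

```python
# Minimal sketch (our illustration) of RASP's core primitives.

def select(keys, queries, predicate):
    # Boolean attention matrix: entry [q][k] is True exactly when
    # predicate(keys[k], queries[q]) holds.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(sel, values, default=0.0):
    # At each query position, average the values at selected keys
    # (mirrors uniform attention over the selected positions).
    out = []
    for row in sel:
        picked = [v for s, v in zip(row, values) if s]
        out.append(sum(picked) / len(picked) if picked else default)
    return out

def reverse(tokens):
    # A classic simple RASP program: each position q attends to
    # position n - 1 - q, reversing the sequence.
    n = len(tokens)
    idx = list(range(n))
    flipped = select(idx, [n - 1 - i for i in idx], lambda k, q: k == q)
    return aggregate(flipped, tokens)

print(reverse([1, 2, 3, 4]))  # [4.0, 3.0, 2.0, 1.0]
```

Programs of this form (a few `select`/`aggregate` steps plus elementwise maps) are the "simple sub-programs" the abstract refers to.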