Are generative pre-trained transformer (GPT) models trained only to predict the next token, or do they implicitly learn a world model from which a sequence is generated one token at a time? We examine this question by deriving a causal interpretation of the attention mechanism in GPT and proposing a causal world model that arises from this interpretation. Furthermore, we propose that GPT models can be utilized at inference time for zero-shot causal structure learning on in-distribution sequences. Empirical evaluation is conducted in a controlled synthetic environment using the setup and rules of the Othello board game. A GPT model, pre-trained on real-world games played with the intention of winning, is tested on synthetic data that adheres only to the game rules. We find that the GPT model tends to generate legal next moves for sequences in which the attention mechanism encodes a causal structure with high confidence. Conversely, in cases where the GPT model generates moves that violate the game rules, it also fails to capture any causal structure.