Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.
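The "nonlinear internal representation" referred to above is typically recovered with a probe: a small classifier trained to read the board state out of the network's hidden activations. The following is a minimal sketch of such a nonlinear probe, not the paper's actual implementation; the hidden width, probe width, and the three-way square labeling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: a GPT-style residual stream and an 8x8 Othello board.
D_HIDDEN = 512           # hidden-state width (assumption, not the paper's value)
N_SQUARES = 64           # 8x8 board
N_CLASSES = 3            # e.g. empty / black / white per square

# A one-hidden-layer MLP probe. The abstract notes the representation is
# nonlinear, so the probe includes a ReLU rather than being purely linear.
W1 = rng.normal(0.0, 0.02, (D_HIDDEN, 256))
b1 = np.zeros(256)
W2 = rng.normal(0.0, 0.02, (256, N_SQUARES * N_CLASSES))
b2 = np.zeros(N_SQUARES * N_CLASSES)

def probe(h):
    """Map one hidden activation vector (D_HIDDEN,) to per-square class logits."""
    z = np.maximum(h @ W1 + b1, 0.0)              # ReLU nonlinearity
    return (z @ W2 + b2).reshape(N_SQUARES, N_CLASSES)

# Stand-in for a real activation taken from the model at some move position.
h = rng.normal(size=D_HIDDEN)
logits = probe(h)
board_pred = logits.argmax(axis=-1).reshape(8, 8)  # predicted board state
print(board_pred.shape)
```

In practice the probe weights would be trained with cross-entropy against ground-truth board states computed from the move sequences; high probe accuracy on held-out games is the evidence for an emergent board representation.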