As language models (LMs) achieve increasingly strong performance across a range of NLP tasks, probing classifiers have become an indispensable technique in the effort to better understand their inner workings. A typical setup involves (1) defining an auxiliary task consisting of a dataset of text annotated with labels, then (2) training small supervised classifiers to predict the labels from the representations of a pretrained LM as it processes the dataset. High probing accuracy is interpreted as evidence that the LM has learned to perform the auxiliary task as an unsupervised byproduct of its original pretraining objective. Despite the widespread usage of probes, however, the robust design and analysis of probing experiments remain challenging. We develop a formal perspective on probing using structural causal models (SCMs). Specifically, given an SCM that explains the distribution of tokens observed during training, we frame the central hypothesis as whether the LM has learned to represent the latent variables of the SCM. Empirically, we extend a recent study of LMs in the context of a synthetic grid-world navigation task, where having an exact model of the underlying causal structure allows us to draw strong inferences from the results of probing experiments. Our techniques provide robust empirical evidence for the ability of LMs to induce the latent concepts underlying text.
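To make the probing setup in steps (1) and (2) concrete, the following is a minimal sketch, assuming a HuggingFace-style pipeline; the model choice (gpt2), the probed layer, and the toy auxiliary task are illustrative assumptions, not the configuration used in this work.

```python
# Minimal probing sketch: extract hidden states from a pretrained LM and fit
# a small linear probe to predict auxiliary-task labels from them.
# (Model, layer, and labels below are hypothetical placeholders.)
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed model choice
model = AutoModel.from_pretrained("gpt2")
model.eval()

def last_token_reps(texts, layer=-1):
    """Return one hidden-state vector per input (last token, chosen layer)."""
    reps = []
    with torch.no_grad():
        for text in texts:
            inputs = tokenizer(text, return_tensors="pt")
            hidden = model(**inputs, output_hidden_states=True).hidden_states[layer]
            reps.append(hidden[0, -1].numpy())  # representation of final token
    return reps

# Hypothetical auxiliary task: texts annotated with latent-variable labels.
train_texts, train_labels = ["move north", "move south"], [0, 1]
probe = LogisticRegression(max_iter=1000).fit(
    last_token_reps(train_texts), train_labels
)
# High accuracy of `probe` on held-out data is then read as evidence that the
# LM's representations encode the probed latent variable.
```

The probe is deliberately small (here, logistic regression) so that high accuracy reflects information already present in the representations rather than capacity added by the probe itself.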