Word-by-word conditional probabilities from Transformer-based language models are often used to model the incremental processing difficulty of human readers. In this paper, we argue that the most common method of aggregating such language models' subword probabilities into word probabilities poses a confound. This is because tokens in the subword vocabulary of most language models have leading whitespaces and therefore do not naturally define the stop probabilities of words. We first prove that this can result in word probabilities that sum to more than one, thereby violating the axiom that $\mathsf{P}(\Omega) = 1$. This property results in a misallocation of word-by-word surprisal, whereby the unacceptability of the end of the current word is incorrectly carried over to the next word. Additionally, this implicit prediction of word boundaries incorrectly models psycholinguistic experiments in which human subjects directly observe upcoming word boundaries. We present a simple decoding technique that reallocates the probability of the trailing whitespace to the current word, resolving this confound. Experiments show that this correction yields lower estimates of garden-path effects in transitive/intransitive sentences and poorer fits to naturalistic reading times.
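The reallocation can be made concrete. Under the standard aggregation, $\mathsf{P}_{\text{lead}}(w_i \mid c)$ is the product of the probabilities of $w_i$'s subword tokens, the first of which carries the leading whitespace. One way to implement the correction sketched above is $\mathsf{P}_{\text{trail}}(w_i \mid c) = \mathsf{P}_{\text{lead}}(w_i \mid c) \cdot \mathsf{Ws}(c \cdot w_i) / \mathsf{Ws}(c)$, where $\mathsf{Ws}(\cdot)$ sums the probability of all tokens that begin with whitespace (or end the sequence): the whitespace mass already counted at the start of $w_i$ is divided out, and the mass of $w_i$ actually ending is multiplied in. The following Python sketch illustrates this for a GPT-2-style vocabulary whose word-initial tokens carry a leading "Ġ"; the helper names and the simplistic treatment of punctuation are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch: word surprisal under the standard "leading whitespace"
# aggregation vs. a trailing-whitespace correction. Assumes a GPT-2-style
# BPE vocabulary in which "Ġ" marks word-initial tokens; names are
# illustrative and each call recomputes a forward pass for clarity.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Tokens that signal "the current word has ended": whitespace-initial
# tokens plus EOS. (Punctuation is ignored here for simplicity.)
vocab_tokens = tokenizer.convert_ids_to_tokens(list(range(len(tokenizer))))
ws_mask = torch.tensor([t.startswith("Ġ") for t in vocab_tokens])
ws_mask[tokenizer.eos_token_id] = True

def next_log_probs(prefix_ids):
    """log P(t | prefix) over the full vocabulary."""
    with torch.no_grad():
        logits = model(torch.tensor([prefix_ids])).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)

def log_ws_mass(prefix_ids):
    """log Ws(prefix): total probability mass on word-ending tokens."""
    return torch.logsumexp(next_log_probs(prefix_ids)[ws_mask], dim=0).item()

def word_surprisals(words):
    """Per-word surprisal (nats) under both decoding schemes."""
    prefix = [tokenizer.eos_token_id]       # GPT-2 reuses EOS as a start symbol
    for word in words:
        ids = tokenizer.encode(" " + word)  # first id carries the leading "Ġ"
        ws_before = log_ws_mass(prefix)     # Ws(c): mass already spent on the
                                            # leading whitespace of this word
        lead = 0.0                          # -log P_lead(w | c)
        for tid in ids:
            lead -= next_log_probs(prefix)[tid].item()
            prefix.append(tid)
        ws_after = log_ws_mass(prefix)      # Ws(c·w): trailing whitespace mass
        trail = lead + ws_before - ws_after # reallocate the whitespace cost
        yield word, lead, trail

for word, lead, trail in word_surprisals("the horse raced past the barn".split()):
    print(f"{word:8s} leading: {lead:6.2f}  trailing: {trail:6.2f}")
```

In surprisal terms, the boundary cost $-\log \mathsf{Ws}(c \cdot w_i)$ is charged to $w_i$ itself rather than folded into the first subword of $w_{i+1}$, which is the reallocation described above.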