Although language model scores are often treated as probabilities, their reliability as probability estimators has mainly been studied through calibration, overlooking other aspects. In particular, it is unclear whether language models produce the same value under different ways of assigning joint probabilities to word spans. Our work introduces a novel framework, ConTestS (Consistency Testing over Spans), involving statistical tests to assess score consistency across interchangeable completion and conditioning orders. We conduct experiments on post-release real and synthetic data to eliminate training effects. Our findings reveal that both Masked Language Models (MLMs) and autoregressive models exhibit inconsistent predictions, with autoregressive models showing larger discrepancies. Larger MLMs tend to produce more consistent predictions, while autoregressive models show the opposite trend. Moreover, for both model types, prediction entropies offer insights into the true word span likelihood and therefore can aid in selecting optimal decoding strategies. The inconsistencies revealed by our analysis, as well as their connection to prediction entropies and the differences between model types, can serve as useful guides for future research on addressing these limitations.
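To make the consistency notion concrete: for a two-word span, the chain rule can factor the joint probability in either order, and the two factorizations must agree for any true distribution. The following minimal sketch (not the paper's ConTestS implementation; the numbers are made up stand-ins for model scores) shows how a discrepancy between the two orders exposes an inconsistent estimator.

```python
import math

# Hypothetical log-scores a model might assign to a two-word span (w1, w2)
# in some context ctx. For an order-consistent model,
#   log P(w1|ctx) + log P(w2|ctx, w1) == log P(w2|ctx) + log P(w1|ctx, w2).
scores = {
    "lp_w1": math.log(0.20),           # log P(w1 | ctx)
    "lp_w2_given_w1": math.log(0.50),  # log P(w2 | ctx, w1)
    "lp_w2": math.log(0.12),           # log P(w2 | ctx)
    "lp_w1_given_w2": math.log(0.80),  # log P(w1 | ctx, w2)
}

order1 = scores["lp_w1"] + scores["lp_w2_given_w1"]   # joint via w1-first: log 0.10
order2 = scores["lp_w2"] + scores["lp_w1_given_w2"]   # joint via w2-first: log 0.096

# Nonzero discrepancy => the two factorizations imply different joints,
# i.e., the scores are not consistent with any single distribution.
discrepancy = order1 - order2
print(f"log-score discrepancy: {discrepancy:.4f}")
```

ConTestS aggregates such per-span discrepancies over many spans and applies statistical tests to decide whether they could plausibly be zero; the sketch shows only the single-span quantity being tested.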