Transformer-based language models have shown a remarkable ability to capture and use contextual information. Although various analysis techniques have been used to quantify and trace the contribution of individual contextual cues to a target task such as subject-verb agreement or coreference resolution, scenarios in which multiple relevant cues are available in the context remain underexplored. In this paper, we investigate how language models handle gender agreement when multiple gender cue words are present, each capable of independently disambiguating a target gender pronoun. We analyze two widely used Transformer-based models: BERT, an encoder-based model, and GPT-2, a decoder-based one. Our analysis employs two complementary approaches: context mixing analysis, which tracks information flow within the model, and a variant of activation patching, which measures the effect of each cue on the model's prediction. We find that BERT tends to prioritize the first cue in the context when forming both the target word representation and the model's prediction, whereas GPT-2 relies more on the final cue. Our findings reveal striking differences in how encoder-based and decoder-based models prioritize and use contextual information for their predictions.
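To make the patching idea concrete, below is a minimal sketch of activation patching with off-the-shelf GPT-2, not the paper's exact variant: a hidden state at one cue position from a "clean" run is copied into a run on a cue-flipped sentence, and we observe how the logit of the target pronoun moves. The sentence pair, the layer index, and the cue position are illustrative assumptions.

```python
# Hypothetical activation-patching sketch (illustrative layer/positions, not
# the authors' setup): patch the clean cue activation into a corrupted run.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

clean = "My aunt said that"     # cue "aunt" supports the pronoun " she"
corrupt = "My uncle said that"  # cue flipped to "uncle"
layer, cue_pos = 6, 1           # assumed layer and token index of the cue

# Hidden states from the clean run; hidden_states[layer] is the output of
# transformer block layer-1 (hidden_states[0] is the embedding output).
with torch.no_grad():
    clean_out = model(**tok(clean, return_tensors="pt"),
                      output_hidden_states=True)
clean_h = clean_out.hidden_states[layer]

def patch_hook(module, inputs, output):
    # GPT2Block returns a tuple; overwrite the cue position of its hidden
    # states with the corresponding clean-run activation.
    hs = output[0].clone()
    hs[:, cue_pos] = clean_h[:, cue_pos]
    return (hs,) + output[1:]

she_id = tok.encode(" she")[0]

with torch.no_grad():
    base = model(**tok(corrupt, return_tensors="pt")).logits[0, -1, she_id]
    handle = model.transformer.h[layer - 1].register_forward_hook(patch_hook)
    patched = model(**tok(corrupt, return_tensors="pt")).logits[0, -1, she_id]
    handle.remove()

print(f"logit(' she')  corrupted: {base.item():.3f}  patched: {patched.item():.3f}")
```

A large recovery of the pronoun logit after patching suggests the cue's representation at that layer and position carries the gender information the model uses; sweeping this measurement over layers and over each cue position is what allows per-cue attribution when multiple cues are present.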