Idioms present a unique challenge for language models because their non-compositional figurative interpretations often diverge sharply from their literal readings. In this paper, we employ causal tracing to systematically analyze how pretrained causal transformers resolve this ambiguity. We localize three mechanisms: (i) early sublayers and specific attention heads retrieve an idiom's figurative interpretation while suppressing its literal one; (ii) when disambiguating context precedes the idiom, the model exploits it from the earliest layers, and later layers refine the interpretation if the context conflicts with the retrieved reading; (iii) selective, competing pathways then carry both interpretations: an intermediate pathway prioritizes the figurative interpretation while a parallel direct route favors the literal one, keeping both readings available. Our findings provide mechanistic evidence for idiom comprehension in autoregressive transformers.
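The core operation behind causal tracing is activation patching: cache hidden states from a "clean" run, inject one of them into a "corrupted" run, and measure how much of the clean output is restored. The following is a minimal, self-contained sketch of that loop on a toy stack of layers; the model, inputs, and names here are illustrative stand-ins, not the paper's actual setup.

```python
import numpy as np

# Toy stand-in for a transformer: a stack of nonlinear "layers" whose
# hidden states we can cache and patch. Purely illustrative.
rng = np.random.default_rng(0)
n_layers, d = 4, 8
weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_layers)]
readout = rng.standard_normal(d)

def run(x, patch_layer=None, patch_state=None):
    """Forward pass that optionally overwrites one layer's hidden state
    (the core move of causal tracing / activation patching)."""
    states = []
    h = x
    for i, W in enumerate(weights):
        h = np.tanh(W @ h)
        if i == patch_layer:
            h = patch_state  # inject the cached "clean" activation
        states.append(h)
    return readout @ h, states

clean_x = rng.standard_normal(d)               # e.g. idiom in figurative context
corrupt_x = clean_x + rng.standard_normal(d)   # e.g. noised / conflicting context

clean_out, clean_states = run(clean_x)
corrupt_out, _ = run(corrupt_x)

# Patch each layer's clean activation into the corrupted run; layers whose
# patch restores the clean output carry the causally relevant information.
for layer in range(n_layers):
    patched_out, _ = run(corrupt_x, patch_layer=layer,
                         patch_state=clean_states[layer])
    recovery = (patched_out - corrupt_out) / (clean_out - corrupt_out)
    print(f"layer {layer}: recovery = {recovery:.2f}")
```

In a real analysis the same patch is applied per layer and per token position (and, for head-level localization, per attention head), producing the recovery maps from which mechanisms like the ones above are read off.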