As large language models (LLMs) advance in their linguistic capacity, understanding how they capture aspects of language competence remains a significant challenge. This study therefore employs psycholinguistic paradigms in English, which are well-suited for probing deeper cognitive aspects of language processing, to explore neuron-level representations in a language model across three tasks: sound-shape association, sound-gender association, and implicit causality. Our findings indicate that while GPT-2-XL struggles with the sound-shape task, it demonstrates human-like abilities in both sound-gender association and implicit causality. Targeted neuron ablation and activation manipulation reveal a crucial relationship: when GPT-2-XL displays a linguistic ability, specific neurons correspond to that competence; conversely, the absence of such an ability indicates a lack of specialized neurons. This study is the first to utilize psycholinguistic experiments to investigate deep language competence at the neuron level, providing a new level of granularity in model interpretability and insights into the internal mechanisms driving language ability in transformer-based LLMs.