This study explores the temporal dynamics of language processing by examining the alignment between word representations from a pre-trained transformer-based language model and EEG data. Using a Temporal Response Function (TRF) model, we investigate how neural activity corresponds to model representations across layers, offering insight into the interaction between artificial language models and brain responses during language comprehension. Our analysis reveals distinct TRF patterns across layers, highlighting their varying contributions to lexical and compositional processing. Additionally, we use linear discriminant analysis (LDA) to isolate part-of-speech (POS) representations, shedding light on their influence on neural responses and on the mechanisms underlying syntactic processing. These findings underscore EEG's utility for probing language-processing dynamics with high temporal resolution. By bridging artificial language models and neural activity, this study advances our understanding of their interaction at fine timescales.
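The TRF approach mentioned above can be illustrated with a minimal sketch: a TRF is typically estimated by regressing the EEG signal onto time-lagged copies of a stimulus feature, commonly with ridge regularization. The function name, shapes, and regularization value below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Estimate a TRF by time-lagged ridge regression (illustrative sketch).

    stimulus : 1-D array of length n_samples (e.g. a word-level feature)
    eeg      : array of shape (n_samples, n_channels)
    Returns a TRF of shape (n_lags, n_channels).
    """
    n_samples = len(stimulus)
    # Design matrix: column `lag` holds the stimulus delayed by `lag` samples.
    X = np.zeros((n_samples, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n_samples - lag]
    # Ridge solution: (X'X + alpha * I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)
```

On synthetic data generated by convolving a random stimulus with a known kernel, the estimator recovers that kernel, which is the sanity check one would run before applying it to real EEG.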