Speech language models align with human brain responses to natural language to an impressive degree. However, current models rely heavily on low-level speech features, indicating that they lack brain-relevant semantics and limiting their utility as model organisms of semantic processing in the brain. In this work, we address this limitation by inducing brain-relevant bias directly into the models via fine-tuning with fMRI recordings of people listening to natural stories, a process we name brain-tuning. Across three different pretrained model families, we show that brain-tuning not only improves overall alignment with new brain recordings in semantic language regions, but also reduces the reliance on low-level speech features for this alignment. Excitingly, we further show that brain-tuning leads to 1) consistent improvements in performance on a range of downstream tasks and 2) a representational space with increased semantic preference. Our results provide converging evidence, for the first time, that incorporating brain signals into the training of language models improves the models' semantic understanding.
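To make the described procedure concrete, below is a minimal sketch of what brain-tuning could look like in code. The abstract does not specify the implementation, so the backbone (a wav2vec2-style model), the mean pooling over each audio window, the linear voxel readout, and the MSE objective are all illustrative assumptions rather than the paper's actual recipe.

```python
# Hypothetical sketch of brain-tuning: fine-tune a pretrained speech model to
# predict fMRI voxel responses to the same naturalistic audio. All specifics
# (backbone, pooling, head, loss) are assumptions, not the paper's method.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class BrainTuner(nn.Module):
    def __init__(self, n_voxels: int, checkpoint: str = "facebook/wav2vec2-base"):
        super().__init__()
        self.speech_model = Wav2Vec2Model.from_pretrained(checkpoint)
        hidden = self.speech_model.config.hidden_size
        # Linear readout from pooled speech representations to fMRI voxels
        # (a hypothetical head; the actual mapping may differ).
        self.voxel_head = nn.Linear(hidden, n_voxels)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) of raw audio for one fMRI acquisition window
        states = self.speech_model(waveform).last_hidden_state  # (B, T, H)
        pooled = states.mean(dim=1)          # average over time within the window
        return self.voxel_head(pooled)       # predicted voxel responses (B, V)

def brain_tuning_step(model, optimizer, waveform, fmri_targets):
    """One fine-tuning step: regress predicted onto recorded voxel responses."""
    optimizer.zero_grad()
    pred = model(waveform)
    loss = nn.functional.mse_loss(pred, fmri_targets)  # assumed MSE objective
    loss.backward()
    optimizer.step()
    return loss.item()
```

A linear voxel readout mirrors the standard encoding-model convention in the brain-alignment literature; the key idea from the abstract is that gradients from the fMRI-prediction loss flow back into the pretrained speech model itself, biasing its representations toward brain-relevant semantics.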