Elucidating the language-brain relationship requires bridging the methodological gap between the abstract theoretical frameworks of linguistics and the empirical neural data of neuroscience. As an interdisciplinary cornerstone, computational neuroscience formalizes the hierarchical, dynamic structure of language into testable neural models through modeling, simulation, and data analysis, enabling a computational dialogue between linguistic hypotheses and neural mechanisms. Recent advances in deep learning, particularly large language models (LLMs), have substantially accelerated this pursuit: their high-dimensional representational spaces offer a new scale at which to probe the neural basis of linguistic processing, while the "model-brain alignment" framework provides a methodology for evaluating the biological plausibility of language-related theories.
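To make the "model-brain alignment" methodology concrete, a common instantiation is a linear encoding model: model-derived features for a set of stimuli are regressed onto recorded brain responses, and alignment is scored as the held-out correlation between predicted and observed activity. The sketch below is illustrative only, using fully synthetic data in place of real LLM hidden states and fMRI recordings; the feature and voxel counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: X stands in for model-derived features (e.g. LLM hidden
# states) for 200 stimuli; Y stands in for responses of 50 "voxels". Both are
# synthetic here -- a real study would use fMRI/ECoG data and real embeddings.
n_stim, n_feat, n_vox = 200, 32, 50
X = rng.standard_normal((n_stim, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))       # simulated ground truth
Y = X @ W_true + 0.5 * rng.standard_normal((n_stim, n_vox))  # noisy responses

# Train/test split of stimuli.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression, closed form: W = (X'X + alpha*I)^{-1} X'Y
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_feat), X_tr.T @ Y_tr)
Y_pred = X_te @ W

# Alignment score: per-voxel Pearson correlation between predicted and
# observed held-out responses.
def pearson_per_column(a, b):
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    )

scores = pearson_per_column(Y_pred, Y_te)
print(f"mean encoding correlation across voxels: {scores.mean():.3f}")
```

In practice the same recipe is run per cortical voxel or electrode with cross-validated regularization, and the resulting correlation map is what licenses claims about where and how well a model's representations align with neural activity.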