Generating human language through non-invasive brain-computer interfaces (BCIs) has the potential to unlock many applications, such as restoring communication for disabled patients. To date, however, language generation via BCIs has succeeded only in a classification setup: selecting, from pre-generated sentence-continuation candidates, the one whose semantic representation best matches the recorded cortical activity. Inspired by recent research revealing associations between the brain and large computational language models, we propose a generative language BCI that couples a large language model (LLM) with a semantic brain decoder to generate language directly from functional magnetic resonance imaging (fMRI) input. The proposed model generates coherent language sequences aligned with the semantic content of the visual or auditory language stimuli being perceived, without prior knowledge of any pre-generated candidates. We compare the language generated by the proposed model against a random control, a pre-generated language selection approach, and a standard LLM that produces common coherent text based solely on next-word likelihood learned from statistical language training data. The proposed model generates language that is better aligned with the semantic stimulus in response to which the brain input was recorded. Our findings demonstrate the potential and feasibility of employing BCIs for direct language generation.
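The abstract describes combining an LLM's next-word likelihood with a semantic brain decoder's alignment score. A minimal sketch of that joint-scoring idea, with an assumed toy vocabulary, toy word embeddings, and hand-set LLM probabilities standing in for a real LLM and fMRI decoder (all names and values here are illustrative, not the paper's implementation):

```python
import math

# Toy vocabulary with assumed 2-d word embeddings (illustration only).
vocab = {
    "dog":   [0.9, 0.1],
    "stock": [0.1, 0.9],
    "the":   [0.5, 0.5],
}

# Assumed LLM next-word log-probabilities; a real LLM would supply these.
llm_logprob = {"dog": math.log(0.2), "stock": math.log(0.5), "the": math.log(0.3)}

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def next_word(decoded_semantic_vec, weight=2.0):
    """Pick the word maximizing LLM log-likelihood plus a weighted
    semantic-alignment score against the vector the brain decoder
    (hypothetically) recovered from fMRI. `weight` trades off fluency
    against semantic alignment with the brain input."""
    def score(w):
        return llm_logprob[w] + weight * cosine(vocab[w], decoded_semantic_vec)
    return max(vocab, key=score)

# Suppose the semantic decoder maps the fMRI scan to a vector near "dog":
print(next_word([0.95, 0.05]))           # brain evidence steers generation
print(next_word([0.95, 0.05], weight=0.0))  # weight 0 = plain LLM choice
```

With the brain-derived score included, generation follows the decoded semantics ("dog") rather than the word the LLM alone finds most probable ("stock"), which is the contrast the abstract draws against a standard LLM baseline.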