Generating human language through non-invasive brain-computer interfaces (BCIs) has the potential to unlock many applications, such as serving disabled patients and improving communication. To date, however, language generation via BCIs has succeeded only within a classification setup, selecting the pre-generated sentence-continuation candidate with the most likely cortical semantic representation. Inspired by recent research revealing associations between the brain and large computational language models, we propose a generative language BCI that combines the capacity of a large language model (LLM) with a semantic brain decoder to generate language directly from functional magnetic resonance imaging (fMRI) input. The proposed model generates coherent language sequences aligned with the semantic content of the perceived visual or auditory language stimuli, without prior knowledge of any pre-generated candidates. We compare the language generated by the proposed model against a random control, a pre-generated language selection approach, and a standard LLM, which generates coherent text solely from next-word likelihoods learned from statistical language training data. The proposed model generates language that is better aligned with the semantic stimuli in response to which the brain input is sampled. Our findings demonstrate the potential and feasibility of employing BCIs for direct language generation.
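The core idea of jointly using an LLM and a semantic brain decoder can be sketched in simplified form: at each generation step, the LLM proposes candidate next words with log-probabilities, a decoder supplies a semantic embedding predicted from the fMRI signal, and candidates are re-scored by combining the LLM prior with similarity to the decoded semantics. The function names, embeddings, and weighting below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors (0.0 if either is zero).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rescore(candidates, decoded_embedding, word_embeddings, alpha=0.5):
    """Pick the next word by mixing the LLM log-prob with semantic
    similarity to the embedding decoded from brain activity.
    `alpha` (a placeholder hyperparameter) trades off the two terms."""
    scored = []
    for word, logprob in candidates:
        sim = cosine(word_embeddings[word], decoded_embedding)
        scored.append((word, alpha * logprob + (1 - alpha) * sim))
    return max(scored, key=lambda t: t[1])[0]

# Toy example: the decoded fMRI semantics point toward "dog", even though
# the LLM prior alone slightly favours "cat".
word_embeddings = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}
candidates = [("cat", -1.0), ("dog", -1.2)]  # (word, LLM log-prob)
decoded = [0.1, 0.9]                         # embedding decoded from fMRI

print(rescore(candidates, decoded, word_embeddings))  # -> dog
```

In this sketch the brain signal steers generation away from the LLM's statistically preferred continuation, which is the contrast the abstract draws against a standard LLM baseline.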