Decoding non-invasive brain recordings is crucial for advancing our understanding of human cognition, yet it faces challenges from individual differences and complex neural signal representations. Traditional methods require customized models and extensive trials, and they lack interpretability in visual reconstruction tasks. Our framework integrates 3D brain structures with visual semantics using a Vision Transformer 3D. The unified feature extractor efficiently aligns fMRI features with multiple levels of visual embeddings, removing the need for subject-specific models and enabling extraction from single-trial data. This extractor consolidates multi-level visual features into one network, simplifying integration with Large Language Models (LLMs). Additionally, we augment the fMRI dataset with diverse fMRI-image-related textual data to support multimodal large-model development. Integration with LLMs enhances decoding capabilities, enabling tasks such as brain captioning, question answering, detailed description, complex reasoning, and visual reconstruction. Our approach not only achieves superior performance across these tasks but also precisely identifies and manipulates language-based concepts within brain signals, improving interpretability and providing deeper insights into neural processes. These advances significantly broaden the applicability of non-invasive brain decoding in neuroscience and human-computer interaction, laying the groundwork for advanced brain-computer interfaces and cognitive models.
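To make the multi-level alignment objective concrete, the sketch below shows one common way such an extractor can be trained: minimizing a cosine-distance loss between fMRI-derived features and visual embeddings at each feature level, summed across levels. This is a minimal illustration with hypothetical shapes and function names, not the paper's actual implementation.

```python
import numpy as np

def cosine_alignment_loss(fmri_feats, visual_feats):
    """Mean (1 - cosine similarity) between paired feature rows."""
    f = fmri_feats / np.linalg.norm(fmri_feats, axis=1, keepdims=True)
    v = visual_feats / np.linalg.norm(visual_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(f * v, axis=1)))

def multilevel_alignment_loss(fmri_levels, visual_levels):
    """Sum the alignment loss over corresponding feature levels."""
    return sum(cosine_alignment_loss(f, v)
               for f, v in zip(fmri_levels, visual_levels))

# Toy example: 2 feature levels, batch of 4, feature dim 8 (shapes hypothetical).
rng = np.random.default_rng(0)
fmri = [rng.standard_normal((4, 8)) for _ in range(2)]
# Perfectly aligned targets yield zero loss.
loss_zero = multilevel_alignment_loss(fmri, fmri)
# Random visual targets yield a positive loss.
visual = [rng.standard_normal((4, 8)) for _ in range(2)]
loss_rand = multilevel_alignment_loss(fmri, visual)
```

In practice such a loss would drive a shared encoder so that one network serves all feature levels, which is what makes the extracted features directly consumable by a downstream LLM.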