Recent advances in decoding language from brain signals (EEG and MEG) have been driven largely by pre-trained language models, leading to remarkable progress on publicly available non-invasive EEG/MEG datasets. However, previous works predominantly rely on teacher forcing during text generation, which causes significant performance drops when it is not used. A fundamental issue is the inability to establish a unified feature space that correlates textual data with the corresponding evoked brain signals. Although some recent studies attempt to mitigate this gap using the audio-text pre-trained model Whisper, favored for its signal input modality, they still largely overlook the inherent differences between audio signals and brain signals when directly applying Whisper to decode brain signals. To address these limitations, we propose a new multi-stage strategy for semantic brain signal decoding via vEctor-quantized speCtrogram reconstruction for WHisper-enhanced text generatiOn, termed BrainECHO. Specifically, BrainECHO successively performs: 1) discrete autoencoding of the audio spectrogram; 2) brain-audio latent space alignment; and 3) semantic text generation via Whisper fine-tuning. Through this autoencoding-alignment-fine-tuning process, BrainECHO outperforms state-of-the-art methods under the same data split settings on two widely used resources: the EEG dataset (Brennan) and the MEG dataset (GWilliams). The innovation of BrainECHO, together with its robustness and superiority at the sentence, session, and subject-independent levels across public datasets, underscores its significance for language-based brain-computer interfaces.
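The abstract gives no implementation details, but the core operation of stage 1, mapping continuous spectrogram frames to discrete codebook entries, can be sketched minimally. All names, shapes, and the random codebook below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def vector_quantize(frames, codebook):
    """Assign each spectrogram frame to its nearest codebook vector (L2 distance).

    frames:   (T, D) array of spectrogram frames
    codebook: (K, D) array of code vectors (learned in practice; random here)
    Returns discrete token indices (T,) and the quantized frames (T, D).
    """
    # Pairwise squared distances between every frame and every code: (T, K)
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)        # one discrete token per frame
    return idx, codebook[idx]     # quantized reconstruction

rng = np.random.default_rng(0)
spec = rng.normal(size=(100, 80))   # e.g. 100 frames of an 80-bin mel spectrogram
codes = rng.normal(size=(512, 80))  # toy codebook with 512 entries
tokens, recon = vector_quantize(spec, codes)
```

In a full pipeline of this kind, a brain encoder would then be trained to predict the same discrete tokens (or the quantized latents) from EEG/MEG, before the aligned representation is fed into a fine-tuned Whisper decoder for text generation.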