Decoding emotion from brain activity could unlock a deeper understanding of the human experience. While a number of existing datasets align brain data with speech and with speech transcripts, no datasets have annotated brain data with sentiment. To bridge this gap, we explore the use of pre-trained Text-to-Sentiment models to annotate non-invasive brain recordings, acquired using magnetoencephalography (MEG) while participants listened to audiobooks. Having annotated the text, we employ forced alignment of the text and audio to align our sentiment labels with the brain recordings. It is then straightforward to train Brain-to-Sentiment models on these data. Experimental results show an improvement in balanced accuracy for Brain-to-Sentiment decoding compared to baseline, supporting the proposed approach as a proof of concept for leveraging existing MEG datasets and learning to decode sentiment directly from the brain.
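A minimal sketch of the annotation pipeline the abstract describes: a pre-trained Text-to-Sentiment model labels transcript sentences, and forced-alignment timestamps (assumed here to come from an external aligner) project those labels onto MEG time windows. The example transcript, sampling rate, epoch length, and windowing scheme below are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: sentiment-annotate transcript sentences, then map the labels
# onto MEG epochs via forced-alignment timestamps. All values are assumptions.
import numpy as np
from transformers import pipeline

# Pre-trained Text-to-Sentiment model (any Hugging Face sentiment checkpoint).
sentiment_model = pipeline("sentiment-analysis")

# Transcript sentences with onset/offset times (seconds), assumed to be the
# output of a forced aligner run on the audiobook audio and its transcript.
aligned_sentences = [
    {"text": "It was the best of times.", "onset": 12.4, "offset": 14.1},
    {"text": "It was the worst of times.", "onset": 14.3, "offset": 16.0},
]

# Annotate each aligned sentence with a sentiment label.
for sent in aligned_sentences:
    sent["sentiment"] = sentiment_model(sent["text"])[0]["label"]

# Project sentence-level labels onto fixed-length MEG epochs (assumed scheme).
sfreq = 1000.0        # assumed MEG sampling rate (Hz)
epoch_len = 0.5       # assumed epoch length (s)
n_samples = 120_000   # assumed recording length, in samples

epoch_onsets = np.arange(0.0, n_samples / sfreq, epoch_len)
epoch_labels = []
for t in epoch_onsets:
    label = "NEUTRAL"  # default when no annotated sentence covers the epoch
    for sent in aligned_sentences:
        if sent["onset"] <= t < sent["offset"]:
            label = sent["sentiment"]
            break
    epoch_labels.append(label)

# epoch_labels now provides per-epoch targets for a Brain-to-Sentiment model.
```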