Electroencephalogram (EEG)-based emotion decoding can objectively quantify emotional states and has broad application prospects in human-computer interaction and the early detection of emotional disorders. Recently, deep learning architectures have substantially improved the performance of EEG emotion decoding. However, existing methods still fall short of fully capturing the complex spatiotemporal dynamics of neural signals, which are crucial for representing emotion processing. This study proposes a Dynamic-Attention-based EEG State Transition (DAEST) modeling method to characterize EEG spatiotemporal dynamics. The model extracts spatiotemporal components of EEG that represent multiple parallel neural processes and estimates dynamic attention weights over these components to capture transitions in brain states. The model is optimized within a contrastive learning framework for cross-subject emotion recognition. The proposed method achieved state-of-the-art performance on three publicly available datasets: FACED, SEED, and SEED-V. It reached 75.4% accuracy on binary classification of positive and negative emotions and 59.3% on nine-class discrete emotion classification on the FACED dataset, 88.1% on three-class classification of positive, negative, and neutral emotions on the SEED dataset, and 73.6% on five-class discrete emotion classification on the SEED-V dataset. The learned EEG spatiotemporal patterns and dynamic transition properties offer valuable insights into the neural dynamics underlying emotion processing.
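To make the core idea concrete, the following is a minimal NumPy sketch of the two ingredients the abstract names: projecting multichannel EEG onto a small set of spatiotemporal components (standing in for parallel neural processes) and computing time-varying attention weights over those components so that the dominant component at each time step traces a brain-state sequence. All shapes, the random spatial filters, and the energy-based attention are illustrative assumptions, not the paper's actual learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: C EEG channels, T time samples, K components.
C, T, K = 32, 250, 4
eeg = rng.standard_normal((C, T))               # one EEG segment (channels x time)
spatial_filters = rng.standard_normal((K, C))   # stand-in for learned component projections

# Project the EEG onto K parallel components, one per assumed neural process.
components = spatial_filters @ eeg              # shape (K, T)

# Dynamic attention: a numerically stable softmax over per-component
# instantaneous energy, yielding a time-varying weight for each component
# (a simple proxy for the learned dynamic-attention module).
energy = components ** 2                        # shape (K, T)
e = energy - energy.max(axis=0, keepdims=True)  # subtract max for stability
attn = np.exp(e) / np.exp(e).sum(axis=0, keepdims=True)

# The dominant component at each time step gives a brain-state sequence,
# whose changes over time correspond to state transitions.
state_sequence = attn.argmax(axis=0)            # shape (T,)
```

In the actual model, the spatial filters and attention would be trained end-to-end under a contrastive objective for cross-subject recognition; this sketch only illustrates the data flow from channels to components to state transitions.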