Emotion recognition using electroencephalography (EEG) signals has garnered widespread attention in recent years. However, existing studies have struggled to develop a model generalized enough to transfer across datasets without re-training (the cross-corpus setting). This difficulty arises because distribution differences across datasets far exceed intra-dataset variability. To address this problem, we propose a novel Soft Contrastive Masked Modeling (SCMM) framework. Inspired by emotional continuity, SCMM integrates soft contrastive learning with a new hybrid masking strategy to effectively mine the "short-term continuity" inherent in human emotions. During self-supervised learning, soft weights are assigned to sample pairs, enabling adaptive learning of similarity relationships across samples. Furthermore, we introduce an aggregator that fuses complementary information from multiple close samples, weighted by their pairwise similarities, to enhance fine-grained feature representations; the aggregated representation is then used to reconstruct the original sample. Extensive experiments on the SEED, SEED-IV, and DEAP datasets show that SCMM achieves state-of-the-art (SOTA) performance, outperforming the second-best method by 4.26% average accuracy under two cross-corpus conditions (same-class and different-class) for EEG-based emotion recognition.
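The similarity-weighted aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `soft_aggregate`, the temperature-scaled softmax weighting, the `top_k` neighbor count, and the use of cosine similarity are all assumptions for exposition; SCMM's actual aggregator may differ in each of these choices.

```python
import numpy as np

def soft_aggregate(anchor, candidates, temperature=0.5, top_k=3):
    """Hypothetical sketch of similarity-weighted aggregation:
    fuse the top_k candidate embeddings closest to the anchor,
    weighted by a softmax over their cosine similarities."""
    # Cosine similarity between the anchor and each candidate embedding.
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a                       # shape: (num_candidates,)
    # Keep only the k most similar ("close") samples.
    idx = np.argsort(sims)[-top_k:]
    # Soft weights via a temperature-scaled softmax over similarities.
    w = np.exp(sims[idx] / temperature)
    w /= w.sum()
    # Weighted aggregation of complementary information.
    return w @ candidates[idx]         # shape: same as anchor

# Toy usage: aggregate a fused representation from random embeddings.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
candidates = rng.normal(size=(10, 8))
fused = soft_aggregate(anchor, candidates)
```

The temperature controls how sharply the weights concentrate on the single most similar sample: a low temperature approaches hard nearest-neighbor selection, while a high temperature approaches uniform averaging over the selected neighbors.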