Utilizing functional near-infrared spectroscopy (fNIRS) signals for emotion recognition is a significant advance in understanding human emotions. However, owing to the scarcity of artificial-intelligence-ready data and algorithms in this field, current research faces the following challenges: 1) portable wearable devices impose stricter requirements on model lightweightness; 2) objective physiological and psychological differences among subjects increase the difficulty of emotion recognition. To address these challenges, we propose a novel cross-subject fNIRS emotion recognition method called the Online Multi-level Contrastive Representation Distillation framework (OMCRD). Specifically, OMCRD is a framework designed for mutual learning among multiple lightweight student networks. It applies a multi-level fNIRS feature extractor to each sub-network and performs multi-view emotional mining on the physiological signals. The proposed Inter-Subject Interaction Contrastive Representation (IS-ICR) facilitates knowledge transfer through interactions between student models, enhancing cross-subject emotion recognition performance. The best-performing student network can then be selected and deployed on a wearable device. Experimental results demonstrate that OMCRD achieves state-of-the-art performance on emotional perception and affective imagery tasks.
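To make the core idea concrete, the following is a minimal sketch of an inter-subject interaction contrastive loss between two peer student networks, in the spirit of IS-ICR. The abstract does not specify the exact formulation, so this sketch assumes an InfoNCE-style objective in which positives are feature pairs (drawn from the two students) that share the same emotion label, possibly from different subjects; the function name and temperature parameter are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F


def interaction_contrastive_loss(z_a: torch.Tensor,
                                 z_b: torch.Tensor,
                                 labels: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical InfoNCE-style loss between two student networks' features.

    z_a, z_b : (N, D) feature batches from two peer (student) networks.
    labels   : (N,) emotion labels; same-label cross-network pairs are positives,
               which lets samples from different subjects attract each other.
    """
    # L2-normalize so similarities are cosine similarities.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # Cross-network similarity: sample i of student A vs sample j of student B.
    sim = z_a @ z_b.t() / temperature                         # (N, N)
    # Positive mask: pairs sharing the same emotion label (diagonal included).
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    # Log-softmax over each anchor's row of cross-network similarities.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average negative log-likelihood over the positive pairs of each anchor.
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()
```

In a mutual-learning setup, this loss would be computed for every pair of student networks and added to each student's supervised classification loss, so that knowledge is exchanged online rather than distilled from a fixed teacher.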