Continuous dimensional speech emotion recognition captures affective variation along valence, arousal, and dominance, providing finer-grained representations than categorical approaches. Yet most multimodal methods rely solely on the global transcript, which leads to two limitations: (1) all words are treated equally, overlooking that emphasis on different parts of a sentence can shift its emotional meaning; (2) only surface lexical content is represented, leaving out higher-level interpretive cues. To overcome these issues, we propose MSF-SER (Multi-granularity Semantic Fusion for Speech Emotion Recognition), which augments acoustic features with three complementary levels of textual semantics: Local Emphasized Semantics (LES), Global Semantics (GS), and Extended Semantics (ES). These are integrated through an intra-modal gated fusion module and a cross-modal FiLM-modulated lightweight Mixture-of-Experts (FM-MOE). Experiments on MSP-Podcast and IEMOCAP show that MSF-SER consistently improves dimensional emotion prediction, demonstrating the effectiveness of enriched semantic fusion for SER.
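To make the two fusion stages concrete, the following is a minimal PyTorch sketch of an intra-modal gated fusion over the three semantic streams, followed by a FiLM-modulated lightweight mixture-of-experts head. All module names, the hidden size `d`, and the expert count are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch, assuming a shared hidden size d and 4 experts;
# not the authors' implementation, only an illustration of the idea.
import torch
import torch.nn as nn

class GatedSemanticFusion(nn.Module):
    """Intra-modal gated fusion of LES, GS, and ES text features (illustrative)."""
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(3 * d, 3)  # one scalar gate per semantic stream

    def forward(self, les, gs, es):  # each: (B, d)
        w = torch.softmax(self.gate(torch.cat([les, gs, es], dim=-1)), dim=-1)
        return w[:, 0:1] * les + w[:, 1:2] * gs + w[:, 2:3] * es

class FiLMMoE(nn.Module):
    """Cross-modal fusion: the fused text feature FiLM-modulates the acoustic
    feature, then a lightweight mixture-of-experts predicts V/A/D (illustrative)."""
    def __init__(self, d, n_experts=4):
        super().__init__()
        self.film = nn.Linear(d, 2 * d)      # produces (gamma, beta)
        self.router = nn.Linear(d, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, 3))
            for _ in range(n_experts)        # 3 outputs: valence, arousal, dominance
        )

    def forward(self, acoustic, text):       # each: (B, d)
        gamma, beta = self.film(text).chunk(2, dim=-1)
        h = gamma * acoustic + beta                    # FiLM modulation
        w = torch.softmax(self.router(h), dim=-1)      # soft expert weights
        out = torch.stack([e(h) for e in self.experts], dim=1)  # (B, E, 3)
        return (w.unsqueeze(-1) * out).sum(dim=1)      # (B, 3) V/A/D scores
```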