Emotion recognition is inherently ambiguous, with uncertainty arising both from rater disagreement and from discrepancies across modalities such as speech and text. There is growing interest in modeling rater ambiguity using label distributions. However, modality ambiguity remains underexplored, and multimodal approaches often rely on simple feature fusion without explicitly addressing conflicts between modalities. In this work, we propose AmbER$^2$, a dual ambiguity-aware framework that simultaneously models rater-level and modality-level ambiguity through a teacher-student architecture with a distribution-wise training objective. Evaluations on IEMOCAP and MSP-Podcast show that AmbER$^2$ consistently improves distributional fidelity over conventional cross-entropy baselines and achieves performance competitive with, or superior to, recent state-of-the-art systems. For example, on IEMOCAP, AmbER$^2$ achieves relative improvements of 20.3% on Bhattacharyya coefficient (0.83 vs. 0.69), 13.6% on R$^2$ (0.67 vs. 0.59), 3.8% on accuracy (0.683 vs. 0.658), and 4.5% on F1 (0.675 vs. 0.646). Further analysis across ambiguity levels shows that explicitly modeling ambiguity is particularly beneficial for highly uncertain samples. These findings highlight the importance of jointly addressing rater and modality ambiguity when building robust emotion recognition systems.
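The distributional-fidelity numbers above use the Bhattacharyya coefficient, which measures overlap between the rater label distribution and the model's predicted distribution (1.0 means a perfect match). A minimal sketch of the standard formula follows; the four emotion classes and the example probabilities are illustrative, not taken from the paper.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient between two discrete distributions.

    BC(p, q) = sum_i sqrt(p_i * q_i); equals 1.0 iff p == q.
    Inputs are normalized defensively so raw vote counts also work.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Hypothetical example: rater vote distribution vs. model prediction
# over four classes (e.g. happy / neutral / sad / angry).
raters = [0.6, 0.3, 0.1, 0.0]
model = [0.5, 0.35, 0.1, 0.05]
print(round(bhattacharyya_coefficient(raters, model), 3))  # → 0.972
```

Because the coefficient compares whole distributions rather than single hard labels, it rewards a model that spreads probability mass the way raters do, which is exactly the behavior a rater-ambiguity-aware objective is meant to encourage.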