Multimodal emotion recognition systems rely heavily on the full availability of modalities and suffer significant performance degradation when modality data is incomplete. To tackle this issue, we present the Cross-Modal Alignment, Reconstruction, and Refinement (CM-ARR) framework, which proceeds sequentially through cross-modal alignment, reconstruction, and refinement phases to handle missing modalities and enhance emotion recognition. The alignment phase uses unsupervised distribution-based contrastive learning to align heterogeneous modality distributions, reducing cross-modal discrepancies while modeling semantic uncertainty. The reconstruction phase applies normalizing flow models to transform these aligned distributions and recover missing modalities. The refinement phase employs supervised point-based contrastive learning to disrupt semantic correlations and accentuate emotional traits, thereby enriching the affective content of the reconstructed representations. Extensive experiments on the IEMOCAP and MSP-IMPROV datasets confirm the superior performance of CM-ARR under both missing- and complete-modality conditions. Notably, averaged across six missing-modality scenarios, CM-ARR achieves absolute improvements of 2.11% in WAR and 2.12% in UAR on the IEMOCAP dataset, and of 1.71% in WAR and 1.96% in UAR on the MSP-IMPROV dataset.
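The refinement phase's supervised point-based contrastive objective can be illustrated with a minimal sketch. This assumes a SupCon-style loss over L2-normalized emotion embeddings, where samples sharing an emotion label are pulled together and others pushed apart; the function name, temperature value, and NumPy formulation are illustrative choices, not details taken from the paper.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss sketch: for each anchor, maximize the
    log-probability of same-label (positive) samples under a softmax over
    all other samples in the batch.

    features: (N, D) array of embeddings; labels: (N,) integer emotion labels.
    """
    # L2-normalize so similarities are cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature

    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)

    # Numerically stable log-softmax over all non-self samples.
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # Positives: same label, excluding the anchor itself.
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    pos_counts = pos_mask.sum(axis=1)

    # Mean negative log-probability of positives, averaged over anchors
    # that have at least one positive.
    per_anchor = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return per_anchor[pos_counts > 0].mean()
```

Intuitively, embeddings that cluster by emotion label yield a lower loss than embeddings whose labels are scattered across clusters, which is the pressure that accentuates emotional traits in the reconstructed representations.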