In recent years, Multimodal Emotion Recognition (MER) has made substantial progress. Nevertheless, most existing approaches neglect the semantic inconsistencies that may arise across modalities, such as conflicting emotional cues between text and visual inputs. Moreover, current methods are often dominated by the text modality due to its strong representational capacity, which can compromise recognition accuracy. To address these challenges, we propose a model termed Calibrated Multimodal Consensus (CMC). CMC introduces a Pseudo Label Generation Module (PLGM) to produce pseudo unimodal labels, enabling unimodal pretraining in a self-supervised fashion. It then employs a Parameter-free Fusion Module (PFM) and a Multimodal Consensus Router (MCR) for multimodal fine-tuning, thereby mitigating text dominance and guiding the fusion process toward a more reliable consensus. Experimental results demonstrate that CMC achieves performance on par with or superior to state-of-the-art methods across four datasets (CH-SIMS, CH-SIMS v2, CMU-MOSI, and CMU-MOSEI), and exhibits notable advantages in scenarios with semantic inconsistencies on CH-SIMS and CH-SIMS v2. The implementation of this work is publicly accessible at https://github.com/gw-zhong/CMC.