Multimodal emotion recognition (MER), leveraging speech and text, has emerged as a pivotal domain within human-computer interaction, demanding sophisticated methods for effective multimodal integration. Aligning features across these modalities is a significant challenge, yet most existing approaches adopt a single alignment strategy. Such a narrow focus not only limits model performance but also fails to address the complexity and ambiguity inherent in emotional expressions. In response, this paper introduces a Multi-Granularity Cross-Modal Alignment (MGCMA) framework, distinguished by its comprehensive approach encompassing distribution-based, instance-based, and token-based alignment modules. This framework enables a multi-level perception of emotional information across modalities. Experiments on the IEMOCAP dataset demonstrate that the proposed method outperforms current state-of-the-art techniques.
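To make the three alignment granularities concrete, the sketch below pairs each module with an objective commonly used at that level: a symmetric KL divergence between modality-level Gaussian distributions, an InfoNCE contrastive loss over utterance-level embeddings, and a cross-attention reconstruction loss over token sequences. This is a minimal illustration only; the function names, loss choices, and hyperparameters (e.g., the 0.07 temperature) are assumptions, not the paper's actual formulations.

import torch
import torch.nn.functional as F

def distribution_alignment(speech_mu, speech_logvar, text_mu, text_logvar):
    """Assumed: align per-modality diagonal Gaussians with a symmetric KL."""
    var_s, var_t = speech_logvar.exp(), text_logvar.exp()
    kl_st = 0.5 * (var_s / var_t + (text_mu - speech_mu) ** 2 / var_t
                   - 1 + text_logvar - speech_logvar).sum(-1)
    kl_ts = 0.5 * (var_t / var_s + (speech_mu - text_mu) ** 2 / var_s
                   - 1 + speech_logvar - text_logvar).sum(-1)
    return (kl_st + kl_ts).mean()

def instance_alignment(speech_emb, text_emb, temperature=0.07):
    """Assumed: CLIP-style InfoNCE over utterance-level embeddings (B, D)."""
    s = F.normalize(speech_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.T / temperature                     # (B, B) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)  # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

def token_alignment(speech_tokens, text_tokens):
    """Assumed: cross-attention maps speech frames (B, T_s, D) onto text tokens (B, T_t, D)."""
    scale = speech_tokens.size(-1) ** 0.5
    attn = torch.softmax(text_tokens @ speech_tokens.transpose(1, 2) / scale, dim=-1)
    aligned = attn @ speech_tokens                     # (B, T_t, D) speech view of each token
    return F.mse_loss(aligned, text_tokens)

In a training loop, the three losses would typically be summed with per-module weights and added to the emotion classification loss; the weighting scheme here is again an assumption for illustration.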