Multimodal emotion recognition in conversation (MER) aims to accurately identify emotions in conversational utterances by integrating multimodal information. Previous methods usually treat all modalities as being of equal quality and employ symmetric architectures for multimodal fusion. In reality, however, the quality of different modalities often varies considerably, and a symmetric architecture struggles to recognize conversational emotions accurately when modal information is uneven. Furthermore, fusing multimodal information at a single granularity may fail to integrate modal information adequately, further degrading recognition accuracy. In this paper, we propose a novel Cross-Modality Augmented Transformer with Hierarchical Variational Distillation, called CMATH, which consists of two major components: Multimodal Interaction Fusion and Hierarchical Variational Distillation. The former comprises two submodules, Modality Reconstruction and the Cross-Modality Augmented Transformer (CMA-Transformer): Modality Reconstruction focuses on obtaining a high-quality compressed representation of each modality, while the CMA-Transformer adopts an asymmetric fusion strategy that treats one modality as the central modality and the others as auxiliary modalities. The latter first designs a variational fusion network to fuse the fine-grained representations learned by the CMA-Transformer into a coarse-grained representation. It then introduces a hierarchical distillation framework to maintain consistency between modality representations at different granularities. Experiments on the IEMOCAP and MELD datasets demonstrate that our proposed model outperforms previous state-of-the-art baselines. Implementation code is available at https://github.com/cjw-MER/CMATH.
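The asymmetric fusion strategy described above can be illustrated with a minimal sketch: the central modality supplies the attention queries while each auxiliary modality supplies keys and values, so fusion is not symmetric across modalities. This is an illustrative simplification in numpy, not the paper's actual implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(central, auxiliary):
    # central: (T, d) utterance features of the central modality (queries)
    # auxiliary: (T, d) features of an auxiliary modality (keys and values)
    d = central.shape[-1]
    scores = central @ auxiliary.T / np.sqrt(d)   # (T, T) attention scores
    return softmax(scores) @ auxiliary            # auxiliary info aligned to central

# Illustrative asymmetric fusion: text as central modality,
# audio and vision as auxiliary modalities (hypothetical setup).
T, d = 4, 8
rng = np.random.default_rng(0)
text, audio, vision = (rng.normal(size=(T, d)) for _ in range(3))

fused = (text
         + cross_modal_attention(text, audio)
         + cross_modal_attention(text, vision))
print(fused.shape)  # (4, 8)
```

In a full model, each modality would take a turn as the central modality, yielding one fused fine-grained representation per modality before the coarse-grained variational fusion stage.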