Emotional Mimicry Intensity (EMI) estimation is a key technology for understanding human social behavior and enhancing human-computer interaction, and its core challenge lies in modeling the dynamic correlations among multimodal temporal signals and fusing them robustly. To address three limitations of existing methods, namely insufficient exploitation of cross-modal synergy, sensitivity to noise, and limited fine-grained alignment, this paper proposes a dual-stage cross-modal alignment framework. First, we construct vision-text and audio-text contrastive learning networks based on an improved CLIP architecture, achieving preliminary alignment in the feature space through modality-decoupled pre-training. Second, we design a temporal-aware dynamic fusion module that combines Temporal Convolutional Networks (TCN) with a gated bidirectional LSTM to capture the macro-scale evolution of facial expressions and the local dynamics of acoustic features, respectively. We further introduce a quality-guided modality fusion strategy that enables modality compensation under occlusion and noise through differentiable weight allocation. Experimental results on the Hume-Vidmimic2 dataset show that our method achieves an average Pearson correlation coefficient of 0.35 across six emotion dimensions, outperforming the best baseline by 40\%. Ablation studies further validate the effectiveness of the dual-stage training strategy and the dynamic fusion mechanism, providing a new technical pathway for fine-grained emotion analysis in open environments.
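To make the stage-one objective concrete, the sketch below shows a symmetric InfoNCE (CLIP-style) contrastive loss between one non-text modality (vision or audio) and text. This is a minimal illustration only: the abstract does not specify the projection heads, embedding dimension, or temperature, so all identifiers and defaults here are assumptions.

```python
# Minimal sketch of a stage-one alignment objective: a symmetric
# InfoNCE (CLIP-style) contrastive loss between one non-text modality
# and text. Function name, dimensions, and temperature are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F

def clip_style_loss(modality_emb: torch.Tensor,
                    text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired embeddings.

    modality_emb, text_emb: (batch, dim) projections of the same clips.
    """
    # L2-normalize so dot products become cosine similarities.
    m = F.normalize(modality_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = m @ t.T / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(m.size(0), device=m.device)
    # Matched pairs lie on the diagonal; treat both retrieval directions
    # as classification problems and average them.
    loss_m2t = F.cross_entropy(logits, targets)
    loss_t2m = F.cross_entropy(logits.T, targets)
    return 0.5 * (loss_m2t + loss_t2m)
```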
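The second-stage temporal module is likewise only named in the abstract; the following sketch shows one plausible reading, with a dilated TCN block for the visual stream and a gated bidirectional LSTM for the acoustic stream. Shapes, layer sizes, and class names are our assumptions.

```python
# Sketch of the two temporal branches: a dilated temporal convolution
# block (TCN) for slow facial-expression dynamics and a gated BiLSTM
# for local acoustic dynamics. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated temporal convolution block with a residual connection."""
    def __init__(self, channels: int, dilation: int, kernel_size: int = 3):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2   # "same"-length padding
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x):                          # x: (batch, channels, time)
        return self.act(self.norm(self.conv(x))) + x

class GatedBiLSTM(nn.Module):
    """Bidirectional LSTM whose output is modulated by a learned sigmoid gate.

    `dim` must be even so the two directions concatenate back to `dim`.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (batch, time, dim)
        h, _ = self.lstm(x)                        # (batch, time, dim)
        return torch.sigmoid(self.gate(h)) * h     # element-wise gating
```

Stacking several `TCNBlock`s with increasing dilation (1, 2, 4, ...) widens the receptive field exponentially, which is what lets the convolutional branch track macro-scale expression evolution while the gated BiLSTM attends to short-range acoustic detail.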
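Finally, the quality-guided fusion can be read as learning a scalar quality score per modality and normalizing the scores with a softmax, so the weights remain differentiable and a degraded stream (occluded face, noisy audio) is down-weighted while the other modalities compensate. The sketch below follows that reading; the scorer architecture and dimensions are assumptions.

```python
# Sketch of quality-guided fusion with differentiable weight allocation.
# Each modality gets a scalar quality score from a small MLP; a softmax
# over the scores yields fusion weights trained end-to-end with the
# regression objective. Architecture details are illustrative assumptions.
from typing import List
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityGuidedFusion(nn.Module):
    def __init__(self, dim: int, num_modalities: int):
        super().__init__()
        # One lightweight quality scorer per modality stream.
        self.scorers = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(),
                          nn.Linear(dim // 2, 1))
            for _ in range(num_modalities)])

    def forward(self, feats: List[torch.Tensor]) -> torch.Tensor:
        # feats: list of (batch, dim) per-modality features.
        scores = torch.cat([s(f) for s, f in zip(self.scorers, feats)],
                           dim=-1)                 # (batch, num_modalities)
        weights = F.softmax(scores, dim=-1)        # differentiable allocation
        stacked = torch.stack(feats, dim=1)        # (batch, num_modalities, dim)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (batch, dim)
```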