Multimodal sarcasm detection is a complex task that requires distinguishing subtle complementary signals across modalities while filtering out irrelevant information. Many advanced methods rely on shortcuts learned from datasets rather than extracting the intended sarcasm-related features. Our experiments show that such shortcut learning impairs a model's generalization to real-world scenarios. Furthermore, through systematic experiments we reveal the weaknesses of current modality fusion strategies for multimodal sarcasm detection, highlighting the need to focus on effective modality fusion for complex emotion recognition. To address these challenges, we construct MUStARD++$^{R}$ by removing shortcut signals from MUStARD++. We then introduce a Multimodal Conditional Information Bottleneck (MCIB) model to enable efficient multimodal fusion for sarcasm detection. Experimental results show that MCIB achieves the best performance without relying on shortcut learning.