Open-Vocabulary Multimodal Emotion Recognition (OV-MER) is inherently challenging because multimodal cues are often equivocal, their ambiguity stemming from unobserved situational dynamics. While Multimodal Large Language Models (MLLMs) offer extensive semantic coverage, their performance is often bottlenecked by premature commitment to dominant data priors, yielding suboptimal heuristics that overlook crucial, complementary affective cues across modalities. We argue that effective affective reasoning requires more than surface-level association: it must reconstruct nuanced emotional states by synthesizing multiple evidence-grounded rationales that reconcile the observations from diverse latent perspectives. We introduce HyDRA, a Hybrid-evidential Deductive Reasoning Architecture that formalizes inference as a Propose-Verify-Decide protocol. To internalize this abductive process, we employ reinforcement learning with hierarchical reward shaping, aligning reasoning trajectories with final task performance so that they best reconcile the observed multimodal cues. Systematic evaluations validate our design choices: HyDRA consistently outperforms strong baselines, especially in ambiguous or conflicting scenarios, while providing interpretable, diagnostic evidence traces.
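To make the Propose-Verify-Decide protocol concrete, here is a deliberately minimal sketch of that three-stage loop. Everything in it (the `Hypothesis` type, the agreement-counting verifier, the example cues) is a hypothetical illustration of the general idea, not HyDRA's actual implementation, which proposes rationales with an MLLM and trains the loop with reinforcement learning.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str          # candidate emotion label
    rationale: str      # evidence-grounded justification
    score: float = 0.0  # filled in by the verify stage

def propose(cues: dict) -> list:
    # Propose: one candidate hypothesis per modality cue (toy proposer).
    return [Hypothesis(label, f"suggested by {mod} cue '{cue}'")
            for mod, (cue, label) in cues.items()]

def verify(h: Hypothesis, cues: dict) -> float:
    # Verify: toy verifier that scores a hypothesis by counting
    # how many modalities' candidate labels agree with it.
    return sum(1.0 for _, (_, label) in cues.items() if label == h.label)

def decide(hypotheses: list) -> Hypothesis:
    # Decide: keep the hypothesis best supported by verified evidence.
    return max(hypotheses, key=lambda h: h.score)

def propose_verify_decide(cues: dict) -> Hypothesis:
    hyps = propose(cues)
    for h in hyps:
        h.score = verify(h, cues)
    return decide(hyps)

# Illustrative conflicting cues: two modalities agree, one dissents.
cues = {
    "audio": ("trembling voice", "anxious"),
    "video": ("averted gaze", "anxious"),
    "text":  ("sarcastic remark", "amused"),
}
best = propose_verify_decide(cues)
print(best.label, "|", best.rationale)
```

The point of the sketch is the control flow, not the scoring: in a real system the verifier would weigh evidence quality rather than count votes, and the retained rationale doubles as the interpretable evidence trace the abstract mentions.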