Omni-modal large language models (omni LLMs) have recently achieved strong performance across audiovisual understanding tasks, yet they remain highly susceptible to cross-modal hallucinations arising from spurious correlations and dominant language priors. In this work, we propose Modality-Decoupled Direct Preference Optimization (MoD-DPO), a simple and effective framework for improving modality grounding in omni LLMs. MoD-DPO introduces modality-aware regularization terms that explicitly enforce invariance to corruptions of irrelevant modalities and sensitivity to perturbations of relevant modalities, thereby reducing unintended cross-modal interactions. To further mitigate over-reliance on textual priors, we incorporate a language-prior debiasing penalty that discourages hallucination-prone text-only responses. Extensive experiments across multiple audiovisual hallucination benchmarks demonstrate that MoD-DPO consistently improves perception accuracy and hallucination resistance, outperforming prior preference optimization baselines under comparable training budgets. Our findings underscore the importance of modality-faithful alignment and point to a scalable path toward more reliable and resilient multimodal foundation models.
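The abstract does not specify the exact form of the objective; the following is a minimal sketch of one plausible instantiation, assuming the terms operate on per-response log-likelihoods under the policy and a frozen reference model. All function names, the squared/hinge penalty forms, the corruption pairing, and the weighting coefficients are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def dpo_term(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Standard DPO objective over per-response log-likelihoods of the
    # policy (pi_*) and a frozen reference model (ref_*).
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()

def invariance_term(logp_clean, logp_irrelevant_corrupted):
    # Invariance (assumed squared penalty): the chosen response's likelihood
    # should not move when a modality irrelevant to the query is corrupted.
    return (logp_clean - logp_irrelevant_corrupted).pow(2).mean()

def sensitivity_term(logp_clean, logp_relevant_corrupted, margin=1.0):
    # Sensitivity (assumed hinge penalty): corrupting the relevant modality
    # should lower the response's likelihood by at least `margin`.
    return F.relu(margin - (logp_clean - logp_relevant_corrupted)).mean()

def prior_debias_term(logp_full_input, logp_text_only):
    # Language-prior debiasing (assumed hinge penalty): penalize responses
    # that remain likely even when all non-text inputs are dropped,
    # i.e., answers recoverable from text priors alone.
    return F.relu(logp_text_only - logp_full_input).mean()

# Toy usage with random per-response log-likelihoods for a batch of 8;
# the 0.5/0.5/0.1 weights are placeholders, not reported hyperparameters.
B = 8
lp = lambda: torch.randn(B)
loss = (
    dpo_term(lp(), lp(), lp(), lp())
    + 0.5 * invariance_term(lp(), lp())
    + 0.5 * sensitivity_term(lp(), lp())
    + 0.1 * prior_debias_term(lp(), lp())
)
print(loss.item())
```

The design intuition, as the abstract frames it, is that the preference term alone cannot distinguish a response grounded in the right modality from one driven by spurious cross-modal cues or text priors; the three regularizers supply that signal by contrasting likelihoods under targeted input corruptions.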