Object hallucination critically undermines the reliability of Multimodal Large Language Models (MLLMs), often stemming from a fundamental failure of cognitive introspection in which models blindly trust linguistic priors over specific visual evidence. Existing mitigations remain limited: contrastive decoding approaches operate superficially without rectifying internal semantic misalignments, while current latent steering methods rely on static vectors that lack instance-specific precision. We introduce Vision-Language Introspection (VLI), a training-free inference framework that simulates a metacognitive self-correction process. VLI first performs Attributive Introspection to diagnose hallucination risk via probabilistic conflict detection and to localize the causal visual anchors. It then employs Interpretable Bi-Causal Steering to actively modulate the inference process, dynamically isolating visual evidence from background noise while neutralizing overconfident priors through adaptive calibration. VLI achieves state-of-the-art performance across advanced MLLMs, reducing object hallucination rates by 12.67% on MMHal-Bench and improving accuracy by 5.8% on POPE.
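As a rough illustration of the inference-time idea described above, the sketch below adjusts next-token logits using a per-instance conflict score between the vision-conditioned distribution and the text-only prior. It assumes PyTorch and access to both sets of logits; the function name, the KL-based conflict measure, and the sigmoid scaling rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def introspect_and_steer(logits_full: torch.Tensor,
                         logits_text_only: torch.Tensor,
                         tau: float = 0.5,
                         alpha_max: float = 2.0) -> torch.Tensor:
    """Sketch of a two-stage, training-free decoding adjustment.

    logits_full:      next-token logits conditioned on (image, prompt).
    logits_text_only: next-token logits conditioned on the prompt alone,
                      a proxy for the model's linguistic prior.
    All names and formulas here are hypothetical illustrations.
    """
    p_full = F.softmax(logits_full, dim=-1)
    log_p_prior = F.log_softmax(logits_text_only, dim=-1)

    # Stage 1 ("introspection"): per-instance conflict between the
    # vision-conditioned distribution and the linguistic prior.
    # KL(p_full || p_prior) is large when the visual evidence disagrees
    # with what the prior alone would predict.
    conflict = F.kl_div(log_p_prior, p_full, reduction="sum")

    # Stage 2 ("steering"): scale the correction by the diagnosed risk,
    # so low-conflict tokens are left nearly untouched (adaptive calibration).
    alpha = alpha_max * torch.sigmoid(conflict - tau)
    steered = logits_full + alpha * (logits_full - logits_text_only)
    return steered
```

At decode time the steered logits would replace the standard ones in greedy or sampled generation; localizing causal visual anchors (e.g., via attention over image tokens) would add a spatial component that this sketch omits.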