Large vision-language models (VLMs) often benefit from intermediate visual cues, either injected via external tools or generated as latent visual tokens during reasoning, but these mechanisms still overlook fine-grained visual evidence (e.g., polylines in charts), generalize poorly across domains, and incur high inference-time cost. In this paper, we propose Bi-directional Perceptual Shaping (BiPS), which transforms question-conditioned masked views into bidirectional where-to-look signals that shape perception during training. BiPS first applies a KL-consistency constraint between the original image and an evidence-preserving view that keeps only question-relevant regions, encouraging coarse but complete coverage of supporting pixels. It then applies a KL-separation constraint between the original and an evidence-ablated view where critical pixels are masked so the image no longer supports the original answer, discouraging text-only shortcuts (i.e., answering from text alone) and enforcing fine-grained visual reliance. Across eight benchmarks, BiPS boosts Qwen2.5-VL-7B by 8.2% on average and shows strong out-of-domain generalization to unseen datasets and image types.
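The two constraints can be illustrated with a minimal numerical sketch. This is not the paper's implementation: it assumes the constraints act on answer-token distributions from three forward passes (original image, evidence-preserving view, evidence-ablated view), and the hinge margin for the separation term is an illustrative choice, not a detail from the abstract.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-9):
    # KL(p || q) for discrete distributions, clipped for stability.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical answer-token logits from three views of the same input.
logits_orig   = np.array([2.0, 0.5, -1.0])  # original image
logits_keep   = np.array([1.8, 0.6, -0.9])  # evidence-preserving masked view
logits_ablate = np.array([0.1, 0.0,  0.2])  # evidence-ablated masked view

p_orig, p_keep, p_ablate = map(softmax, (logits_orig, logits_keep, logits_ablate))

# KL-consistency: the evidence-preserving view should yield the same
# prediction as the original image, so this term is minimized directly.
loss_consist = kl(p_orig, p_keep)

# KL-separation: the evidence-ablated view should NOT match the original,
# so its KL divergence is pushed up; a hinge with a margin (an assumed
# formulation) turns "maximize divergence" into a minimizable loss.
margin = 1.0
loss_sep = max(0.0, margin - kl(p_orig, p_ablate))

total = loss_consist + loss_sep
print(loss_consist, loss_sep, total)
```

In this toy setting the preserving view nearly matches the original (small consistency loss), while the ablated view's flat distribution diverges from it, shrinking the hinge penalty; jointly, the model is rewarded for covering the supporting pixels and penalized for answering without them.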