Current large vision-language models (LVLMs) typically rely on text-only reasoning over a single-pass visual encoding, which often discards fine-grained visual information. The recently proposed "thinking with images" paradigm attempts to alleviate this limitation by manipulating images via external tools or code; however, the resulting visual states are often insufficiently grounded in linguistic semantics, impairing effective cross-modal alignment, particularly when visual semantics or geometric relationships must be reasoned over across distant regions or multiple images. To address these challenges, we propose "chatting with images", a new framework that reframes visual manipulation as language-guided feature modulation. Guided by expressive language prompts, the model dynamically performs joint re-encoding over multiple image regions, enabling tighter coupling between linguistic reasoning and visual state updates. We instantiate this paradigm in ViLaVT, a novel LVLM equipped with a dynamic vision encoder explicitly designed for such interactive visual reasoning, and train it with a two-stage curriculum that combines supervised fine-tuning and reinforcement learning to promote effective reasoning behaviors. Extensive experiments across eight benchmarks demonstrate that ViLaVT achieves strong and consistent improvements, with particularly pronounced gains on complex multi-image and video-based spatial reasoning tasks.
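The abstract does not specify the internal form of the language-guided feature modulation, so the following is a minimal PyTorch sketch of one plausible instantiation: FiLM-style conditioning of gathered region features on a pooled prompt embedding, followed by joint self-attention across regions. The class `LanguageGuidedReencoder`, its module names, and all shapes are illustrative assumptions, not ViLaVT's actual architecture.

```python
import torch
import torch.nn as nn

class LanguageGuidedReencoder(nn.Module):
    """Hypothetical sketch of language-guided feature modulation:
    FiLM-style scale/shift from a prompt embedding, then joint
    self-attention over all selected regions. Not ViLaVT's real design."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Map the pooled prompt embedding to per-channel scale and shift.
        self.to_scale_shift = nn.Linear(dim, 2 * dim)
        # Jointly re-encode all regions so they can attend to one another.
        self.joint_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )

    def forward(self, region_feats: torch.Tensor, prompt_emb: torch.Tensor):
        # region_feats: (batch, num_regions, dim), features gathered from
        # one or more images; prompt_emb: (batch, dim), pooled language state.
        scale, shift = self.to_scale_shift(prompt_emb).chunk(2, dim=-1)
        modulated = region_feats * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        # One joint pass lets distant regions (or regions from different
        # images) interact during re-encoding.
        return self.joint_attn(modulated)

# Usage: re-encode 6 regions drawn from multiple images under one prompt.
feats = torch.randn(2, 6, 768)
prompt = torch.randn(2, 768)
updated = LanguageGuidedReencoder()(feats, prompt)
print(updated.shape)  # torch.Size([2, 6, 768])
```

The joint attention step is what would allow regions far apart in one image, or spread across several images or video frames, to interact within a single language-conditioned re-encoding pass, mirroring the cross-modal coupling the abstract describes.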