Multimodal Large Language Models (MLLMs) excel at broad visual understanding but still struggle with fine-grained perception, where the decisive evidence is small and easily overwhelmed by global context. Recent "Thinking-with-Images" methods alleviate this by iteratively zooming in on regions of interest during inference, but they incur high latency due to repeated tool calls and visual re-encoding. To address this, we propose Region-to-Image Distillation, which transforms zooming from an inference-time tool into a training-time primitive, thereby internalizing the benefits of agentic zooming into a single forward pass of an MLLM. In particular, we first zoom in to micro-cropped regions to let strong teacher models generate high-quality VQA data, and then distill this region-grounded supervision back to the full image. After training on such data, the smaller student model improves "single-glance" fine-grained perception without tool use. To rigorously evaluate this capability, we further present ZoomBench, a hybrid-annotated benchmark of 845 VQA items spanning six fine-grained perceptual dimensions, together with a dual-view protocol that quantifies the global–regional "zooming gap". Experiments show that our models achieve leading performance across multiple fine-grained perception benchmarks, and also improve general multimodal cognition on tasks such as visual reasoning and GUI agents. We further discuss when "Thinking-with-Images" is necessary versus when its gains can be distilled into a single forward pass. Our code is available at https://github.com/inclusionAI/Zooming-without-Zooming.
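The data pipeline described above can be sketched as follows. This is a minimal illustration, assuming a crop-then-annotate workflow; all function names, the stub teacher, and the example coordinates are hypothetical and stand in for the strong teacher MLLM and image tooling used in practice.

```python
# Hypothetical sketch of Region-to-Image Distillation data construction:
# zoom in on a micro-cropped region, let a teacher model write a QA pair
# from the crop, then attach that supervision to the FULL image so the
# student learns "single-glance" fine-grained perception without tools.
from dataclasses import dataclass
from typing import Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom)

@dataclass
class TrainingExample:
    image_path: str  # full image shown to the student at training time
    question: str    # question generated from the zoomed crop
    answer: str      # teacher answer, grounded in the crop

def crop_region(image_path: str, box: Box) -> str:
    """Zoom in: produce a micro-crop of the region of interest (stub).

    A real implementation might use PIL: Image.open(path).crop(box).
    """
    return f"{image_path}#crop={box}"

def teacher_generate_qa(crop_path: str) -> Tuple[str, str]:
    """Stub for a strong teacher MLLM writing a fine-grained QA pair
    from the zoomed crop; returns a fixed pair for illustration."""
    return ("What is written on the small sign?", "EXIT")

def build_example(image_path: str, box: Box) -> TrainingExample:
    crop = crop_region(image_path, box)
    question, answer = teacher_generate_qa(crop)
    # Key step: supervision came from the zoomed crop, but the example
    # pairs it with the full image, distilling the zoom into training.
    return TrainingExample(image_path=image_path, question=question, answer=answer)

example = build_example("scene.jpg", (120, 40, 180, 90))
print(example.image_path, "->", example.question)
```

The design point is that `TrainingExample` never stores the crop itself: only the full image reaches the student, so the benefit of zooming is internalized rather than invoked at inference time.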