Recent advances in text-only large language models (LLMs), such as DeepSeek-R1, demonstrate remarkable reasoning ability. However, these models remain fragile or entirely incapable when extended to multimodal tasks. Existing approaches largely rely on single-form captions, which lack diversity and often fail to adapt across different types of Visual Question Answering (VQA) benchmarks. As a result, they provide no principled or efficient channel for transmitting fine-grained visual information. We introduce Seeing Eye, a modular framework that unlocks multimodal reasoning in text-only LLMs through an agent-based small VLM translator. This translator acts as a perception agent: it can invoke specialized tools (e.g., OCR and crop) and iteratively distill multimodal inputs into structured intermediate representations (SIRs) tailored to the question. These SIRs are then passed to the text-only LLM, which serves as a reasoning agent. Crucially, the translator and reasoner engage in multi-round feedback and interaction, enabling the extraction of targeted visual details and yielding more confident answers. Experiments on knowledge-intensive VQA benchmarks, including MMMU and MIA-Bench, demonstrate that Seeing Eye not only reduces inference cost but also surpasses much larger end-to-end VLMs. For example, an instantiation combining a 3B-parameter vision translator with an 8B-parameter language reasoner outperforms a monolithic 32B VLM on challenging knowledge-based questions. Our results highlight that decoupling perception from reasoning via an agentic information flow offers a scalable, plug-and-play pathway to multimodal reasoning, allowing strong text-only LLMs to fully leverage their reasoning capabilities. Code is available at: https://github.com/ulab-uiuc/SeeingEye
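To make the translator–reasoner interaction concrete, the minimal Python sketch below illustrates one way the loop described above could be wired together. All names (`call_vlm`, `call_llm`, `SIR`, `seeing_eye`, `max_rounds`), the prompt wording, and the SIR format are hypothetical placeholders introduced for illustration, not the released implementation; the repository linked above is the authoritative reference.

```python
# Illustrative sketch of a translator-reasoner loop, assuming hypothetical
# `call_vlm` / `call_llm` wrappers around a small VLM (perception agent)
# and a text-only LLM (reasoning agent). Tool use (OCR, crop) and the SIR
# schema follow the abstract only loosely.

from dataclasses import dataclass

@dataclass
class SIR:
    """Structured intermediate representation distilled from the image."""
    text: str  # question-tailored visual summary (captions, OCR spans, crop notes)

def call_vlm(image, prompt: str) -> str:
    """Hypothetical wrapper around the small VLM translator."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around the text-only LLM reasoner."""
    raise NotImplementedError

def seeing_eye(image, question: str, max_rounds: int = 3) -> str:
    """Multi-round perception/reasoning interaction (illustrative only)."""
    feedback = ""
    reply = ""
    for _ in range(max_rounds):
        # Perception agent: distill the image into a question-tailored SIR,
        # optionally invoking tools such as OCR or cropping.
        sir = SIR(text=call_vlm(
            image,
            f"Question: {question}\nReasoner feedback: {feedback}\n"
            "Describe only the visual evidence needed to answer; "
            "use OCR or cropping if text or fine details matter."
        ))
        # Reasoning agent: answer from the SIR alone, or request more detail.
        reply = call_llm(
            f"Visual evidence:\n{sir.text}\n\nQuestion: {question}\n"
            "If the evidence suffices, answer with 'ANSWER: ...'; "
            "otherwise state what additional visual detail you need."
        )
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        feedback = reply  # feed the request back to the perception agent
    return reply  # fall back to the last reasoner output
```

The sketch only shows the control flow that the abstract implies: the reasoner never sees the image, only the SIR, and its feedback steers what the translator extracts in the next round.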