While large multi-modal models (LMMs) have exhibited impressive capabilities across diverse tasks, their effectiveness in handling complex tasks has been limited by the prevailing single-step reasoning paradigm. To address this, this paper proposes VoCoT, a multi-step Visually grounded object-centric Chain-of-Thought reasoning framework tailored for inference with LMMs. VoCoT is characterized by two key features: (1) object-centric reasoning paths that revolve around cross-modal shared object-level information, and (2) visually grounded representation of object concepts in a multi-modal interleaved and aligned manner, which effectively bridges the modality gap within LMMs during long-term generation. Additionally, we construct an instruction dataset to facilitate LMMs in adapting to reasoning with VoCoT. By introducing VoCoT into the prevalent open-source LMM architecture, we develop VolCano. With only 7B parameters and limited input resolution, VolCano demonstrates excellent performance across various scenarios, surpassing SOTA models, including GPT-4V, in tasks requiring complex reasoning. Our code, data, and model will be available at https://github.com/RupertLuo/VoCoT.
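To make the abstract's notion of an "interleaved and aligned" visually grounded object representation concrete, the sketch below illustrates one possible form a single VoCoT-style reasoning step could take: natural-language reasoning text interleaved with object mentions that carry normalized bounding-box coordinates. This is our own minimal illustration under stated assumptions; the class names, the `<obj_i>` placeholder tags, and the coordinate serialization are hypothetical and not the paper's actual data format.

```python
# Minimal sketch of an object-centric, visually grounded reasoning step.
# Field names and the <obj_i>/coordinate tag format are illustrative
# assumptions, not VoCoT's actual serialization.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class GroundedObject:
    name: str                                 # object concept, e.g. "red mug"
    bbox: Tuple[float, float, float, float]   # normalized (x1, y1, x2, y2)


@dataclass
class ReasoningStep:
    text: str                       # reasoning text with <obj_i> placeholders
    objects: List[GroundedObject]   # objects referenced by the placeholders

    def to_prompt(self) -> str:
        """Interleave the reasoning text with coordinate tags so each object
        mention stays aligned with its visual region in the image."""
        rendered = self.text
        for i, obj in enumerate(self.objects):
            coords = ",".join(f"{c:.2f}" for c in obj.bbox)
            rendered = rendered.replace(f"<obj_{i}>", f"{obj.name} [{coords}]")
        return rendered


# Example: one step of a multi-step chain that grounds every object it mentions.
step = ReasoningStep(
    text="<obj_0> is to the left of <obj_1>, so the mug is closer to the lamp.",
    objects=[
        GroundedObject("red mug", (0.12, 0.40, 0.28, 0.62)),
        GroundedObject("desk lamp", (0.55, 0.10, 0.78, 0.48)),
    ],
)
print(step.to_prompt())
```

Keeping the coordinates inline with each object mention, rather than in a separate grounding channel, is one way such a representation could keep textual and visual references to the same object aligned across a long generated chain.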