Large Vision-Language Models (LVLMs) have recently demonstrated remarkable success in multi-modal tasks, including advances in Multi-modal Chain-of-Thought (MCoT) reasoning. Despite these successes, current benchmarks still follow a traditional paradigm with multi-modal input and text-only output, which leads to significant drawbacks such as missing visual operations and vague expressions. Motivated by this, we introduce a novel Chain of Multi-modal Thought (CoMT) benchmark to address these limitations. Unlike traditional MCoT benchmarks, CoMT requires both multi-modal input and multi-modal reasoning output, aiming to mimic human-like reasoning that inherently integrates visual operations. Specifically, CoMT consists of four categories: (1) Visual Creation, (2) Visual Deletion, (3) Visual Update, and (4) Visual Selection, to comprehensively explore complex visual operations and concise expression in real scenarios. We evaluate various LVLMs and strategies on CoMT, revealing key insights into the capabilities and limitations of current approaches. We hope that CoMT can inspire more research on introducing multi-modal generation into the reasoning process.