Multimodal language models (MLLMs) are increasingly being applied in real-world environments, necessitating their ability to interpret 3D spaces and comprehend temporal dynamics. Current methods often rely on specialized architectural designs or task-specific fine-tuning to achieve this. We introduce Coarse Correspondences, a simple, lightweight method that enhances MLLMs' spatial-temporal reasoning with 2D images as input, without modifying the architecture or requiring task-specific fine-tuning. Our method uses a lightweight tracking model to identify primary object correspondences between frames in a video or across different image viewpoints, and then conveys this information to MLLMs through visual prompting. We demonstrate that this simple, training-free approach consistently brings substantial gains to GPT-4V/O on four benchmarks that require spatial-temporal reasoning: +20.5\% on ScanQA, +9.7\% on OpenEQA's episodic memory subset, +6.0\% on the long-form video benchmark EgoSchema, and +11\% on the R2R navigation benchmark. Additionally, we show that Coarse Correspondences also enhances open-source MLLMs' spatial reasoning (by +6.9\% on ScanQA) when applied during both training and inference, and that the improvement generalizes to unseen datasets such as SQA3D (+3.1\%). Taken together, these results show that Coarse Correspondences effectively and efficiently boosts model performance on downstream tasks requiring spatial-temporal reasoning.
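To make the pipeline concrete, the sketch below shows one plausible way to implement the steps described above: track objects across frames, keep only the few most persistent ones ("coarse" selection), and overlay numbered marks as the visual prompt. This is a minimal illustration, not the paper's implementation: `track_objects` is a hypothetical placeholder for any off-the-shelf video instance tracker, and the `top_k` value, mark radius, and colors are illustrative choices.

```python
# Minimal sketch of a Coarse Correspondences-style pipeline (illustrative only).
from collections import Counter
from PIL import Image, ImageDraw
import numpy as np

def track_objects(frames):
    """Hypothetical tracker stub: for each frame, return a dict mapping a
    persistent object id to a boolean segmentation mask (H x W numpy array).
    Plug in any lightweight off-the-shelf video tracker here."""
    raise NotImplementedError

def coarse_correspondences(frames, top_k=4):
    per_frame_masks = track_objects(frames)  # list[dict[int, np.ndarray]]

    # "Coarse" selection: keep only the k objects tracked in the most frames.
    counts = Counter(obj_id for masks in per_frame_masks for obj_id in masks)
    keep = {obj_id for obj_id, _ in counts.most_common(top_k)}

    marked = []
    for frame, masks in zip(frames, per_frame_masks):
        canvas = frame.convert("RGB")
        draw = ImageDraw.Draw(canvas)
        for obj_id, mask in masks.items():
            if obj_id not in keep:
                continue
            ys, xs = np.nonzero(mask)                # pixels of this object
            cx, cy = int(xs.mean()), int(ys.mean())  # centroid of the mask
            r = 12                                   # mark radius (illustrative)
            draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill="red")
            draw.text((cx - 4, cy - 6), str(obj_id), fill="white")
        marked.append(canvas)
    return marked  # visually prompted frames, ready to send to the MLLM
```

Because the marks carry the same numeric id wherever an object reappears, the MLLM can recover cross-frame or cross-view correspondences directly from the 2D inputs, with no architectural change.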