Spatial perception is a fundamental component of intelligence. While many studies highlight that large multimodal language models (MLMs) struggle to reason about space, they test only static spatial reasoning, such as categorizing the relative positions of objects. Meanwhile, real-world deployment requires dynamic capabilities like perspective-taking and egocentric action recognition. As a roadmap to improving spatial intelligence, we introduce SAT, Spatial Aptitude Training, which goes beyond static relative-object-position questions to more dynamic tasks. SAT contains 218K question-answer pairs over 22K synthetic scenes, split into training and testing sets. Generated using a photo-realistic physics engine, our dataset can be arbitrarily scaled and easily extended to new actions, scenes, and 3D assets. We find that even MLMs that perform relatively well on static questions struggle to accurately answer dynamic spatial questions. Further, we show that SAT instruction-tuning data improves not only dynamic spatial reasoning on SAT, but also zero-shot performance on existing real-image spatial benchmarks: $23\%$ on CVBench, $8\%$ on the harder BLINK benchmark, and $18\%$ on VSR. When instruction-tuned on SAT, our 13B model matches larger proprietary MLMs like GPT4-V and Gemini-3-1.0 in spatial reasoning. Our data/code is available at http://arijitray1993.github.io/SAT/.