We present AutoSpatial, an efficient method that uses structured spatial grounding to enhance the spatial reasoning of Vision-Language Models (VLMs). By combining minimal manual supervision with large-scale automatic labeling of Visual Question-Answering (VQA) pairs, our approach addresses VLMs' limited spatial understanding in social navigation tasks. Applying a hierarchical two-round VQA strategy during training, AutoSpatial attains both global and fine-grained understanding of scenarios, producing more accurate spatial perception, movement prediction, Chain-of-Thought (CoT) reasoning, final actions, and explanations than other state-of-the-art approaches. These five components are essential for comprehensive social navigation reasoning. We evaluated our approach with both expert systems (GPT-4o, Gemini 2.0 Flash, and Claude 3.5 Sonnet), which provided cross-validation scores, and human evaluators, who assigned relative rankings to compare model performance across four key aspects. Owing to its enhanced spatial reasoning, AutoSpatial improves the averaged cross-validation score from the expert systems over baseline models trained only on manually annotated data: perception & prediction by up to 10.71%, reasoning by up to 16.26%, action by up to 20.50%, and explanation by up to 18.73%.