Embodied question answering (EQA) in 3D environments often requires collecting context that is distributed across multiple viewpoints and partially occluded. However, most recent vision--language models (VLMs) are constrained to a fixed and finite set of input views, which limits their ability to acquire question-relevant context at inference time and hinders complex spatial reasoning. We propose Chain-of-View (CoV) prompting, a training-free, test-time reasoning framework that transforms a VLM into an active viewpoint reasoner through a coarse-to-fine exploration process. CoV first employs a View Selection agent to filter redundant frames and identify question-aligned anchor views. It then performs fine-grained view adjustment by interleaving iterative reasoning with discrete camera actions, obtaining new observations from the underlying 3D scene representation until sufficient context is gathered or a step budget is reached. We evaluate CoV on OpenEQA across four mainstream VLMs and obtain an average +11.56\% improvement in LLM-Match, with a maximum gain of +13.62\% on Qwen3-VL-Flash. CoV further exhibits test-time scaling: increasing the minimum action budget yields an additional +2.51\% average improvement, peaking at +3.73\% on Gemini-2.5-Flash. On ScanQA and SQA3D, CoV delivers strong performance (e.g., 116 CIDEr / 31.9 EM@1 on ScanQA and 51.1 EM@1 on SQA3D). Overall, these results suggest that question-aligned view selection coupled with open-view search is an effective, model-agnostic strategy for improving spatial reasoning in 3D EQA without additional training.