Accurate 6-DoF object pose estimation and tracking are critical for reliable robotic manipulation. However, zero-shot methods often fail under viewpoint-induced ambiguities, and fixed-camera setups struggle when objects move or become self-occluded. To address these challenges, we propose an active pose estimation pipeline that combines a Vision-Language Model (VLM) with "robotic imagination" to detect and resolve pose ambiguities in real time. In an offline stage, we render a dense set of views of the object's CAD model, compute the FoundationPose entropy for each view, and construct a geometry-aware prompt containing low-entropy (unambiguous) and high-entropy (ambiguous) examples. At runtime, the system (1) queries the VLM on the live image for an ambiguity score and, (2) if ambiguity is detected, imagines a discrete set of candidate camera poses by rendering virtual views, scores each candidate with a weighted combination of the VLM ambiguity probability and the FoundationPose entropy, and moves the camera to the Next-Best-View (NBV) to obtain a disambiguated pose estimate. Furthermore, since moving objects may leave the camera's field of view, we introduce an active pose tracking module: a diffusion policy trained via imitation learning that generates camera trajectories preserving object visibility while minimizing pose ambiguity. Experiments in simulation and the real world show that our approach significantly outperforms classical baselines.
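To make the runtime NBV step concrete, the following is a minimal sketch of the candidate-view scoring described above. The callables `vlm_ambiguity` (ambiguity probability in [0, 1] from the VLM) and `pose_entropy` (FoundationPose entropy of a rendered virtual view), as well as the weight `w`, are hypothetical placeholders; the abstract does not specify the actual weighting or normalization used.

```python
import numpy as np

def select_next_best_view(candidate_views, vlm_ambiguity, pose_entropy, w=0.5):
    """Score each imagined camera pose and return the least ambiguous one.

    candidate_views : list of candidate camera poses (e.g., 4x4 extrinsics)
    vlm_ambiguity   : callable, view -> VLM ambiguity probability in [0, 1]
    pose_entropy    : callable, view -> FoundationPose entropy of the view
    w               : trade-off weight between the two signals (assumed value)
    """
    entropies = np.array([pose_entropy(v) for v in candidate_views], dtype=float)
    # Normalize entropy to [0, 1] so both terms are on a comparable scale.
    span = np.ptp(entropies)
    if span > 0:
        entropies = (entropies - entropies.min()) / span
    scores = [w * vlm_ambiguity(v) + (1.0 - w) * e
              for v, e in zip(candidate_views, entropies)]
    # The camera is moved to the candidate that minimizes combined ambiguity.
    return candidate_views[int(np.argmin(scores))]
```

Framing NBV selection as a minimization means the camera moves to the imagined view where both the VLM and the entropy signal agree the pose is least ambiguous; normalizing the entropy before weighting keeps either term from dominating.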