Active perception enables robots to dynamically gather information by adjusting their viewpoints, a crucial capability for interacting with complex, partially observable environments. In this paper, we present AP-VLM, a novel framework that combines active perception with a Vision-Language Model (VLM) to guide robotic exploration and answer semantic queries. Using a 3D virtual grid overlaid on the scene and orientation adjustments, AP-VLM allows a robotic manipulator to intelligently select optimal viewpoints and orientations to resolve challenging tasks, such as identifying objects in occluded or inclined positions. We evaluate our system on two robotic platforms: a 7-DOF Franka Panda and a 6-DOF UR5, across various scenes with differing object configurations. Our results demonstrate that AP-VLM significantly outperforms passive perception methods and baseline models, including Toward Grounded Common Sense Reasoning (TGCSR), particularly in scenarios where fixed camera views are inadequate. The adaptability of AP-VLM in real-world settings shows promise for enhancing robotic systems' understanding of complex environments, bridging the gap between high-level semantic reasoning and low-level control.
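The virtual-grid idea from the abstract can be illustrated with a minimal sketch: discretize a region around the scene into candidate camera positions and compute a look-at direction for each. The function names, grid size, and scoring step here are hypothetical illustrations, not AP-VLM's actual implementation, which delegates viewpoint selection to a VLM.

```python
import itertools
import math

def grid_viewpoints(center, extent, n=3):
    """Candidate camera positions on an n x n x n virtual grid around `center`.

    `center` is an (x, y, z) tuple in meters; `extent` is the grid's side length.
    """
    # Evenly spaced offsets along each axis, spanning [-extent/2, +extent/2].
    offsets = [-extent / 2 + extent * i / (n - 1) for i in range(n)]
    return [tuple(c + o for c, o in zip(center, off))
            for off in itertools.product(offsets, repeat=3)]

def look_at(position, target):
    """Unit viewing direction from a camera `position` toward a `target` point."""
    v = [t - p for p, t in zip(position, target)]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / norm for x in v)

# Hypothetical usage: 27 candidate viewpoints around a tabletop scene center.
candidates = grid_viewpoints(center=(0.5, 0.0, 0.3), extent=0.4, n=3)
directions = [look_at(p, (0.5, 0.0, 0.3)) for p in candidates]
```

In the full system, each candidate pose would be rendered or visited and scored by the VLM for its expected usefulness in answering the query; the grid simply bounds the search to reachable, scene-relevant poses.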