Task-aware navigation remains a challenging area of research, especially in open-vocabulary scenarios. Previous studies primarily focus on finding suitable locations for task completion, often overlooking the importance of the robot's pose. However, the robot's orientation is crucial for successful task completion because object placement constrains how a task can be performed (e.g., opening a refrigerator door). Humans intuitively navigate to objects with the right orientation using semantics and common sense. For instance, when opening a refrigerator, we naturally stand in front of it rather than to the side. Recent advances suggest that Vision-Language Models (VLMs) can provide robots with similar common sense. Therefore, we develop a VLM-driven method called Navigation-to-Gaze (Navi2Gaze) for efficient navigation and object gazing based on task descriptions. This method uses the VLM to automatically score numerous generated candidate poses and select the best one. In evaluations on multiple photorealistic simulation benchmarks, Navi2Gaze significantly outperforms existing approaches and precisely determines the optimal orientation relative to target objects.