Building robots capable of interacting with humans through natural language in the visual world presents a significant challenge in the field of robotics. To address this challenge, Embodied Question Answering (EQA) has been proposed as a benchmark task that measures an agent's ability to answer human-posed questions by navigating a previously unseen environment and identifying the relevant object. Although several methods have been proposed, their evaluations have been limited to simulation, without experiments in real-world scenarios. Furthermore, all of these methods are constrained to a limited vocabulary for question-and-answer interactions, making them unsuitable for practical applications. In this work, we propose a map-based modular EQA method that enables real robots to navigate unknown environments through frontier-based map creation and to handle unseen QA pairs using foundation models that support open vocabulary. Unlike the questions in the existing EQA dataset built on Matterport3D (MP3D), the questions in our real-world experiments contain question formats and vocabulary not included in the training data. We conduct comprehensive experiments in a virtual environment (MP3D-EQA) and two real-world house environments, and demonstrate that our method can perform EQA even in the real world.
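As a minimal sketch of the frontier-based exploration idea mentioned above (not the authors' implementation), the snippet below detects frontier cells on a 2D occupancy grid: free cells that border unexplored space, which an exploring robot would select as navigation goals. The cell-value convention and the `find_frontiers` helper are assumptions for illustration.

```python
import numpy as np

# Assumed cell convention: -1 = unknown, 0 = free, 1 = occupied.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid: np.ndarray) -> list[tuple[int, int]]:
    """Return free cells that are 4-adjacent to at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break  # one unknown neighbor is enough
    return frontiers

# Toy partial map: the top row and one corner are still unexplored.
grid = np.array([
    [-1, -1, -1],
    [ 0,  0, -1],
    [ 0,  1,  0],
])
print(find_frontiers(grid))  # → [(1, 0), (1, 1), (2, 2)]
```

In a full pipeline, the robot would repeatedly navigate to a selected frontier, update the map from new observations, and re-run detection until no frontiers remain or the question can be answered.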