Imagine a future in which we can Zoom-call a robot to manage household chores remotely. This work takes one step in that direction. Robi Butler is a new household robot assistant that enables seamless multimodal remote interaction. It allows the human user to monitor its environment from a first-person view, issue voice or text commands, and specify target objects through hand-pointing gestures. At its core, a high-level behavior module, powered by Large Language Models (LLMs), interprets multimodal instructions to generate multi-step action plans. Each plan consists of open-vocabulary primitives supported by vision-language models, enabling the robot to process both textual and gestural inputs. Zoom provides a convenient interface for remote interaction between the human and the robot. The integration of these components allows Robi Butler to ground remote multimodal instructions in real-world home environments in a zero-shot manner. We evaluated the system on a variety of household tasks, demonstrating its ability to execute complex user commands with multimodal inputs. We also conducted a user study to examine how multimodal interaction influences user experience in remote human-robot interaction. These results suggest that, with advances in robot foundation models, we are moving closer to the reality of remote household robot assistants.
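To make the pipeline concrete, the following is a minimal, hypothetical sketch of the idea behind the behavior module: a deictic reference in the instruction ("this", "that") is grounded against the hand-pointing target, and the resolved instruction is mapped to a multi-step plan of open-vocabulary primitives. The primitive names (`goto`, `pick`, `handover`) and the rule-based planner are illustrative stand-ins for the LLM and vision-language components described above, not the system's actual API.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str      # open-vocabulary skill, e.g. "goto", "pick", "handover" (illustrative)
    argument: str  # free-form object or location description

DEICTICS = {"this", "that", "it"}

def resolve_reference(instruction: str, pointed_object: str) -> str:
    """Ground deictic words in the instruction against the pointing-gesture target."""
    words = [pointed_object if w in DEICTICS else w
             for w in instruction.lower().split()]
    return " ".join(words)

def make_plan(instruction: str, pointed_object: str) -> list[Primitive]:
    """Toy stand-in for the LLM planner: handles fetch-style commands only."""
    resolved = resolve_reference(instruction, pointed_object)
    target = resolved.replace("bring me", "").strip()
    return [
        Primitive("goto", target),     # navigate to the object
        Primitive("pick", target),     # grasp it
        Primitive("goto", "user"),     # return to the user
        Primitive("handover", target), # hand it over
    ]

plan = make_plan("bring me that", "red mug")
```

In the real system, `make_plan` would be an LLM prompted with the resolved instruction and the robot's observation, and each primitive would be executed by a vision-language-model-backed skill rather than a hard-coded rule.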