To operate at building scale, service robots must perform very long-horizon mobile manipulation tasks: navigating to different rooms, accessing different floors, and interacting with a wide range of unseen everyday objects. We refer to these tasks as Building-wide Mobile Manipulation. To tackle these inherently long-horizon tasks, we introduce BUMBLE, a unified Vision-Language Model (VLM)-based framework integrating open-world RGBD perception, a wide spectrum of gross-to-fine motor skills, and a dual-layered memory. Our extensive evaluation (90+ hours) indicates that BUMBLE outperforms multiple baselines on long-horizon building-wide tasks that require sequencing up to 12 ground-truth skills and span 15 minutes per trial. BUMBLE achieves a 47.1% success rate averaged over 70 trials across different buildings, tasks, and scene layouts, starting from different rooms and floors. Our user study demonstrates 22% higher satisfaction with our method than with state-of-the-art mobile manipulation methods. Finally, we demonstrate the potential of using increasingly capable foundation models to push performance further. For more information, see https://robin-lab.cs.utexas.edu/BUMBLE/