Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene via visual inputs (e.g., RGB-D images). Existing approaches rely on real-world imitation learning and exhibit limited generalization due to the difficulty of collecting large-scale training datasets. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the strong generalization and open-vocabulary understanding of large vision models with precise control learned in simulation. We achieve this by designing an accurate, residual-aware EE tracking policy that combines classical robotics with machine learning: a) inverse kinematics to convert residual EE targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations reduce EE tracking error by 3.2x. We use this accurate EE tracker to build a modular system for loco-manipulation, in which open-vocabulary large vision models provide strong visual generalization. Our system operates in diverse real-world environments, from offices to coffee shops, where the robot reliably manipulates various everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43cm to 92cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of our proposed design. We believe the advances in this paper can open up new ways of training humanoid robots to interact with everyday objects.
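To make the residual-aware tracking loop concrete, the following is a minimal sketch of the pattern the abstract describes: iterate inverse kinematics toward a goal, but measure the EE error against a learned forward model (standing in for sim-to-real calibration) rather than the nominal analytic kinematics, so the commanded goal is implicitly adjusted. Everything here is illustrative: the two-link planar arm, the damped-least-squares IK, and the constant-bias "learned" model are assumptions for this sketch, not the paper's actual robot, networks, or controller.

```python
import numpy as np

# Illustrative 2-link planar arm; names and values are hypothetical.
L1, L2 = 0.3, 0.25  # link lengths (m)

def analytic_fk(q):
    """Nominal forward kinematics (the idealized robot model)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def learned_fk(q):
    """Stand-in for the learned neural forward model: here, the analytic
    FK plus a small fixed bias mimicking calibrated sim-to-real error."""
    return analytic_fk(q) + np.array([0.005, -0.003])

def jacobian(q):
    """Analytic Jacobian of the nominal FK w.r.t. joint angles."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def dls_ik_step(q, ee_err, damping=1e-2):
    """One damped-least-squares IK step that reduces the EE error."""
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), ee_err)
    return q + dq

def track(goal, q=np.array([0.4, 0.8]), steps=200, tol=1e-4):
    """Drive the *learned* FK estimate of the EE onto the goal.

    Because the error is computed through learned_fk, the nominal IK is
    effectively given an adjusted goal that compensates for model bias;
    on a real system, drift past tol would trigger replanning instead.
    """
    for _ in range(steps):
        err = goal - learned_fk(q)  # residual EE target
        if np.linalg.norm(err) < tol:
            break
        q = dls_ik_step(q, err)
    return q, learned_fk(q)

goal = np.array([0.35, 0.20])
q_final, ee = track(goal)
print(np.linalg.norm(ee - goal))  # residual tracking error (small)
```

The design point this sketch mirrors: the IK solver never needs to know why the nominal model is wrong; it only needs the learned model's estimate of where the EE actually is, and the residual closes the gap.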