The current state of the art in quadruped locomotion produces robust motion for terrain traversal but requires segmenting a desired robot trajectory into a discrete set of locomotion skills, such as trot and crawl. In contrast, in this work we demonstrate the feasibility of learning a single, unified representation for quadruped locomotion that enables continuous blending between gait types and characteristics. We present Gaitor, which learns a disentangled representation of locomotion skills and thereby shares information common to all gait types seen during training. The structure that emerges in the learnt representation is interpretable: it encodes phase correlations between the different gait types, which can be leveraged to produce continuous gait transitions. In addition, foot-swing characteristics are disentangled and directly addressable. Together with a rudimentary terrain encoding and a learned planner operating in this structured latent space, Gaitor accepts motion commands from a user, including desired gait type and characteristics, while reacting to uneven terrain. We evaluate Gaitor in both simulated and real-world settings on the ANYmal C platform. To the best of our knowledge, this is the first work to learn such a unified and interpretable latent representation for multiple gaits, resulting in on-demand continuous blending between different locomotion modes on a real quadruped robot.
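To make the idea of continuous blending in a shared latent space concrete, the toy sketch below illustrates it with a hand-crafted two-dimensional latent. This is purely illustrative and not Gaitor's actual architecture: the `gait_latent` parameterisation, the `TROT`/`CRAWL` codes, and linear interpolation are all assumptions standing in for a learnt encoder; the key point is only that a blend weight varies the gait continuously rather than switching between discrete skills.

```python
import numpy as np

def gait_latent(phase, radius, phase_offset):
    """Toy latent code for one gait: a point on a circle traced over the
    gait cycle. In a learnt model this would come from an encoder."""
    return np.array([radius * np.cos(phase + phase_offset),
                     radius * np.sin(phase + phase_offset)])

def blend(phase, alpha, gait_a, gait_b):
    """Interpolate between two gait latents at the same phase.
    alpha = 0 gives gait_a, alpha = 1 gives gait_b, values in between
    blend the two continuously."""
    za = gait_latent(phase, *gait_a)
    zb = gait_latent(phase, *gait_b)
    return (1.0 - alpha) * za + alpha * zb

# Hypothetical (radius, phase offset) pairs standing in for learnt codes.
TROT = (1.0, 0.0)
CRAWL = (0.5, np.pi / 2)

# Sweeping alpha from 0 to 1 over successive cycles would transition
# smoothly from trot to crawl; here we sample one blended cycle.
phases = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
trajectory = [blend(p, 0.5, TROT, CRAWL) for p in phases]
```

A decoder operating on such blended codes would emit intermediate footfall patterns, which is the behaviour the abstract describes as on-demand continuous blending between locomotion modes.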