In this work, we study risk-aware reinforcement learning for quadrupedal locomotion. Our approach trains a family of risk-conditioned policies using Conditional Value-at-Risk (CVaR) constrained policy optimization, which improves stability and sample efficiency. At deployment, we adaptively select the best-performing policy from this family with a multi-armed bandit that uses only observed episodic returns, requires no privileged environment information, and adapts to unknown conditions on the fly. In short, we train quadrupedal locomotion policies at varying levels of robustness using CVaR and adaptively select the desired level of robustness online to ensure performance in unknown environments. We evaluate our method in simulation across eight unseen settings (varying dynamics, contacts, sensing noise, and terrain) and on a Unitree Go2 robot on previously unseen terrains. Our risk-aware policy attains nearly twice the mean and tail performance of baseline methods in unseen environments, and our bandit-based adaptation selects the best-performing risk-aware policy for an unknown terrain within two minutes of operation.
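The online selection described above can be sketched as a standard multi-armed bandit over the family of risk-conditioned policies, fed only by observed episodic returns. The sketch below is illustrative, not the paper's implementation: it assumes a UCB1 selection rule (the abstract does not name the exact bandit algorithm), and `sample_return`, the reward values, and the empirical CVaR helper are hypothetical stand-ins for real rollouts.

```python
import math
import random

def empirical_cvar(returns, alpha=0.1):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of returns
    (the 'tail performance' the abstract refers to)."""
    k = max(1, int(math.ceil(alpha * len(returns))))
    worst = sorted(returns)[:k]
    return sum(worst) / k

def ucb1_select(counts, means, t, c=2.0):
    """Pick the policy index maximizing the UCB1 score; play each arm once first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(c * math.log(t) / counts[i]))

def adapt_online(episodic_return, n_policies, n_episodes):
    """Adaptively select among risk-conditioned policies using only
    observed episodic returns (no privileged environment information)."""
    counts = [0] * n_policies
    means = [0.0] * n_policies
    for t in range(1, n_episodes + 1):
        arm = ucb1_select(counts, means, t)
        r = episodic_return(arm)       # roll out the chosen policy, observe return
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running-mean update
    return counts, means

# Hypothetical deployment: on this unknown terrain the most risk-averse
# policy (index 2) happens to yield the highest episodic return.
random.seed(0)
true_means = [0.3, 0.5, 0.8]
sample_return = lambda a: true_means[a] + random.gauss(0.0, 0.1)
counts, means = adapt_online(sample_return, n_policies=3, n_episodes=200)
best = max(range(3), key=lambda i: means[i])
```

The bandit concentrates its episode budget on the policy whose robustness level best matches the deployed conditions, which is why selection can converge within a small number of episodes.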