This paper proposes a new design method for a stochastic control policy using a normalizing flow (NF). In reinforcement learning (RL), the policy is usually modeled as a distribution with trainable parameters. When this parameterization lacks expressiveness, it may fail to acquire the optimal policy. A mixture model is a universal approximator, but excessive redundancy increases its computational cost, which can become a bottleneck for real-time robot control. As an alternative, an NF, which augments a simple base distribution with additional parameters for an invertible transformation, is expected to combine high expressiveness with low computational cost. However, an NF cannot compute its mean analytically due to the complexity of the invertible transformation, and it lacks reliability as a robot controller because it retains stochastic behavior after deployment. This paper therefore designs a restricted NF (RNF) that achieves an analytic mean by appropriately restricting the invertible transformation. In addition, the expressiveness impaired by this restriction is regained by using a bimodal Student-t distribution as the base, yielding the so-called Bit-RNF. In RL benchmarks, the Bit-RNF policy outperformed previous models. Finally, a real robot experiment demonstrated the applicability of the Bit-RNF policy to the real world. The attached video is available on YouTube: https://youtu.be/R_GJVZDW9bk
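The core idea, an invertible transform restricted so the policy mean stays analytic over a bimodal Student-t base, can be illustrated with a minimal NumPy sketch. Note that the affine transform, mixture weights, and distribution parameters below are illustrative assumptions for exposition, not the paper's actual RNF construction: an affine map is one simple restriction under which the mean passes through the flow in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal Student-t base: an equal-weight mixture of two t-distributions.
# All parameter values here are illustrative, not taken from the paper.
locs = np.array([-1.0, 1.0])
scales = np.array([0.5, 0.5])
weights = np.array([0.5, 0.5])
dof = 5.0  # mixture mean exists for dof > 1

def sample_base(n):
    # Draw a component index, then sample a scaled/shifted Student-t.
    comp = rng.choice(2, size=n, p=weights)
    return locs[comp] + scales[comp] * rng.standard_t(dof, size=n)

# Restricted flow layer: an affine (hence invertible) map a = scale*z + shift.
scale, shift = 0.8, 0.2

def flow(z):
    return scale * z + shift

# Analytic mean: linearity lets the mean pass through the affine transform,
# and the mixture mean is just the weight-averaged component locations.
mean_base = np.dot(weights, locs)
mean_policy = scale * mean_base + shift

# Monte Carlo check: the empirical mean of flowed samples matches.
samples = flow(sample_base(200_000))
print(mean_policy, samples.mean())
```

For a general (non-affine) invertible transform, `samples.mean()` would be the only way to estimate the policy mean; the restriction is what makes `mean_policy` computable in closed form for deterministic deployment.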