The pretraining-finetuning paradigm has driven numerous transformative advances in artificial intelligence research in recent years. In reinforcement learning (RL) for robot locomotion, however, individual skills are typically learned from scratch, despite the high likelihood that generalizable knowledge is shared across the task-specific policies of a given robot embodiment. This work defines a paradigm for pretraining neural network models that encapsulate such knowledge and can subsequently serve as a basis for warm-starting the RL process in classic actor-critic algorithms such as Proximal Policy Optimization (PPO). We begin with a task-agnostic, exploration-based data collection algorithm that gathers diverse, dynamic transition data, which is then used to train a Proprioceptive Inverse Dynamics Model (PIDM) through supervised learning. The pretrained weights are loaded into both the actor and critic networks to warm-start policy optimization on the actual tasks. We systematically validate the proposed method on 9 distinct robot locomotion RL environments spanning 3 robot embodiments, and find significant benefits from this initialization strategy. On average, our approach improves sample efficiency by 36.9% and task performance by 7.3% compared to random initialization. We further present key ablation studies and empirical analyses that shed light on the mechanisms behind the method's effectiveness.
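The pipeline above can be sketched in a few lines: an inverse dynamics model is trained by supervised learning to predict the action a_t from consecutive proprioceptive states (s_t, s_{t+1}), and its early-layer weights are then copied into fresh actor and critic networks before RL begins. This is a minimal illustrative sketch, not the paper's implementation: the two-layer numpy MLP, the layer sizes, the MSE loss, the synthetic "exploration" data, and the assumption that the policy observes the same stacked proprioceptive input are all simplifying assumptions.

```python
# Hedged sketch of PIDM pretraining + actor/critic warm-start.
# All architecture and data details here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM, HIDDEN = 12, 4, 32

def init_mlp(in_dim, out_dim, hidden=HIDDEN):
    """Two-layer MLP parameters as a list: [W1, b1, W2, b2]."""
    return [rng.normal(0, 0.1, (in_dim, hidden)), np.zeros(hidden),
            rng.normal(0, 0.1, (hidden, out_dim)), np.zeros(out_dim)]

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)          # shared representation layer
    return h @ W2 + b2, h

# 1) Stand-in for task-agnostic exploration data: transitions (s_t, s_{t+1}, a_t).
s_t   = rng.normal(size=(256, OBS_DIM))
s_tp1 = rng.normal(size=(256, OBS_DIM))
a_t   = rng.normal(size=(256, ACT_DIM))

# 2) Supervised PIDM training: predict a_t from the stacked pair (s_t, s_{t+1}).
pidm = init_mlp(2 * OBS_DIM, ACT_DIM)
x = np.concatenate([s_t, s_tp1], axis=1)
losses = []
for _ in range(100):                   # plain gradient descent on MSE
    pred, h = forward(pidm, x)
    losses.append(float(((pred - a_t) ** 2).mean()))
    err = pred - a_t                   # dL/dpred up to a constant factor
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ pidm[2].T) * (1 - h ** 2)   # backprop through tanh
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    for p, g in zip(pidm, (gW1, gb1, gW2, gb2)):
        p -= 0.1 * g

# 3) Warm-start: copy the pretrained first layer into both actor and critic;
#    the task-specific output heads stay freshly initialized.
actor  = init_mlp(2 * OBS_DIM, ACT_DIM)
critic = init_mlp(2 * OBS_DIM, 1)
for net in (actor, critic):
    net[0] = pidm[0].copy()
    net[1] = pidm[1].copy()
```

From here, PPO would optimize `actor` and `critic` as usual; the only change relative to random initialization is the copied first-layer weights.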