The pretraining-finetuning paradigm has driven numerous transformative advances in artificial intelligence research in recent years. However, in reinforcement learning (RL) for robot motion control, individual skills are typically learned from scratch, despite the high likelihood that generalizable knowledge is shared across all task-specific policies of a single robot embodiment. This work defines a paradigm for pretraining neural network models that encapsulate such knowledge and can subsequently serve as a basis for warm-starting the RL process in classic actor-critic algorithms, such as Proximal Policy Optimization (PPO). We begin with a task-agnostic, exploration-based data collection algorithm that gathers diverse, dynamic transition data, which is then used to train a Proprioceptive Inverse Dynamics Model (PIDM) through supervised learning. The pretrained weights are loaded into both the actor and critic networks to warm-start policy optimization on actual tasks. We systematically validate the proposed method on seven distinct robot motion control tasks, demonstrating significant benefits from this initialization strategy. Compared to random initialization, our approach improves sample efficiency by 40.1% and task performance by 7.5% on average. We further present key ablation studies and empirical analyses that shed light on the mechanisms behind the effectiveness of our method.
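The pipeline above (supervised PIDM pretraining, then weight transfer into the actor and critic) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network sizes, the two-hidden-layer MLP backbone, and the choice to transfer only the shared backbone are all assumptions, and random tensors stand in for the exploration dataset.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions, chosen only for illustration.
OBS_DIM, ACT_DIM, HIDDEN = 48, 12, 64

def make_backbone(in_dim: int) -> nn.Sequential:
    """Shared MLP trunk reused by the PIDM, actor, and critic."""
    return nn.Sequential(
        nn.Linear(in_dim, HIDDEN), nn.ELU(),
        nn.Linear(HIDDEN, HIDDEN), nn.ELU(),
    )

class PIDM(nn.Module):
    """Inverse dynamics: predict the action a_t that produced (s_t, s_{t+1})."""
    def __init__(self):
        super().__init__()
        self.backbone = make_backbone(2 * OBS_DIM)
        self.head = nn.Linear(HIDDEN, ACT_DIM)

    def forward(self, s_t, s_next):
        return self.head(self.backbone(torch.cat([s_t, s_next], dim=-1)))

# --- Stage 1: supervised pretraining on exploration transitions ---
# (random tensors stand in for the task-agnostic exploration dataset)
pidm = PIDM()
opt = torch.optim.Adam(pidm.parameters(), lr=1e-3)
s, s_next, a = (torch.randn(256, OBS_DIM), torch.randn(256, OBS_DIM),
                torch.randn(256, ACT_DIM))
for _ in range(5):
    loss = nn.functional.mse_loss(pidm(s, s_next), a)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: warm-start PPO networks from the pretrained backbone ---
# Here the actor/critic are assumed to share the PIDM's input structure,
# so the entire backbone transfers; only the task heads stay random.
actor_backbone = make_backbone(2 * OBS_DIM)
critic_backbone = make_backbone(2 * OBS_DIM)
actor_backbone.load_state_dict(pidm.backbone.state_dict())
critic_backbone.load_state_dict(pidm.backbone.state_dict())
actor_head = nn.Linear(HIDDEN, ACT_DIM)   # freshly initialized
critic_head = nn.Linear(HIDDEN, 1)        # freshly initialized
```

After this transfer, PPO training proceeds as usual; only the initialization differs from the random-init baseline.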