Precise robot manipulation is critical for fine-grained applications such as chemical and biological experiments, where even small errors (e.g., reagent spillage) can invalidate an entire task. Existing approaches often rely on pre-collected expert demonstrations and train policies via imitation learning (IL) or offline reinforcement learning (RL). However, obtaining high-quality demonstrations for precision tasks is difficult and time-consuming, while offline RL commonly suffers from distribution shifts and low data efficiency. We introduce a Role-Model Reinforcement Learning (RM-RL) framework that unifies online and offline training in real-world environments. The key idea is a role-model strategy that automatically generates labels for online training data using approximately optimal actions, eliminating the need for human demonstrations. RM-RL reformulates policy learning as supervised training, reducing instability from distribution mismatch and improving efficiency. A hybrid training scheme further leverages online role-model data for offline reuse, enhancing data efficiency through repeated sampling. Extensive experiments show that RM-RL converges faster and more stably than existing RL methods, yielding significant gains in real-world manipulation: 53% improvement in translation accuracy and 20% in rotation accuracy. Finally, we demonstrate the successful execution of a challenging task, precisely placing a cell plate onto a shelf, highlighting the framework's effectiveness where prior methods fail.
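For a concrete picture of the role-model idea summarized above, the sketch below illustrates automatically labeling online samples with approximately optimal actions and reusing them for supervised, offline-style updates. It is a minimal illustration under assumed details (a known target pose, per-axis clipped corrective actions, an MSE regression loss, a toy replay buffer); the names `Policy`, `role_model_label`, and `train_rm_rl` are illustrative and do not come from the paper's released implementation.

```python
# Minimal sketch of Role-Model RL as described in the abstract (assumptions noted above).
import random
import torch
import torch.nn as nn

class Policy(nn.Module):
    """MLP mapping the current pose error to a corrective action."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def role_model_label(pose_error: torch.Tensor, max_step: float = 0.01) -> torch.Tensor:
    """Approximately optimal action: step toward the target, clipped per axis (assumed form)."""
    return pose_error.clamp(-max_step, max_step)

def train_rm_rl(policy: Policy, steps: int = 1000, batch_size: int = 64) -> None:
    """Hybrid training: label each online sample with the role model, store it,
    and repeatedly resample the buffer for supervised updates."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    buffer: list[tuple[torch.Tensor, torch.Tensor]] = []
    for _ in range(steps):
        # Stand-in for online interaction: an observed pose error (x, y, z, roll, pitch, yaw).
        obs = torch.randn(6) * 0.05
        label = role_model_label(obs)        # automatic label, no human demonstration
        buffer.append((obs, label))
        # Offline reuse: sample previously collected role-model data.
        batch = random.sample(buffer, min(batch_size, len(buffer)))
        obs_b = torch.stack([o for o, _ in batch])
        lab_b = torch.stack([a for _, a in batch])
        loss = nn.functional.mse_loss(policy(obs_b), lab_b)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The point of the sketch is the reformulation: instead of bootstrapped value targets, the policy is regressed onto role-model labels, which is why training behaves like supervised learning and tolerates repeated reuse of the same online data.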