Pre-training robot policies with a rich set of skills can substantially accelerate the learning of downstream tasks. Prior work has defined pre-training tasks via natural language instructions, but doing so requires tedious human annotation of hundreds of thousands of instructions. Thus, we propose SPRINT, a scalable offline policy pre-training approach that substantially reduces the human effort needed for pre-training a diverse set of skills. Our method uses two core ideas to automatically expand a base set of pre-training tasks: instruction relabeling via large language models and cross-trajectory skill chaining through offline reinforcement learning. As a result, SPRINT pre-training equips robots with a much richer repertoire of skills. Experimental results in a household simulator and on a real robot kitchen manipulation task show that SPRINT leads to substantially faster learning of new long-horizon tasks than previous pre-training approaches. Website at https://clvrai.com/sprint.