Diffusion models have demonstrated strong capabilities in modeling multi-task trajectories. However, existing multi-task planners or policies typically rely on task-specific demonstrations for multi-task imitation, or require task-specific reward labels to facilitate policy optimization via Reinforcement Learning (RL). To address these challenges, we aim to develop a versatile diffusion planner that can leverage large-scale, lower-quality data containing task-agnostic sub-optimal trajectories, while retaining the ability to adapt quickly to specific tasks. In this paper, we propose \textbf{SODP}, a two-stage framework that leverages \textbf{S}ub-\textbf{O}ptimal data to learn a \textbf{D}iffusion \textbf{P}lanner that generalizes across diverse downstream tasks. Specifically, in the pre-training stage, we train a foundation diffusion planner that extracts general planning capabilities by modeling the broad distribution of multi-task trajectories, which may be sub-optimal but provide wide data coverage. For downstream tasks, we then apply RL-based fine-tuning with task-specific rewards to rapidly refine the diffusion planner so that it generates action sequences with higher task-specific returns. Experimental results on multi-task domains, including Meta-World and Adroit, demonstrate that SODP outperforms state-of-the-art methods using only a small amount of data for reward-guided fine-tuning.
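To make the two-stage recipe concrete, the following is a minimal, self-contained PyTorch sketch under simplifying assumptions: a toy MLP denoiser over fixed-length action sequences stands in for the actual planner architecture, and the RL stage is illustrated with a REINFORCE-style policy gradient over the reverse denoising chain, one plausible instantiation of reward-guided fine-tuning. All names here (\texttt{denoise}, \texttt{pretrain\_step}, \texttt{finetune\_step}, \texttt{reward\_fn}) are hypothetical illustrations, not the paper's released code.
\begin{verbatim}
import torch
import torch.nn as nn

T = 50                                     # number of diffusion steps
H, A = 16, 4                               # planning horizon, action dimension
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Tiny denoiser eps_theta(x_t, t); a real planner would use a temporal U-Net.
eps_model = nn.Sequential(nn.Linear(H * A + 1, 256), nn.ReLU(),
                          nn.Linear(256, H * A))
opt = torch.optim.Adam(eps_model.parameters(), lr=1e-4)

def denoise(x, t):
    return eps_model(torch.cat([x, t.float().unsqueeze(-1) / T], dim=-1))

def pretrain_step(actions):
    """Stage 1: denoising loss over (possibly sub-optimal) trajectories."""
    t = torch.randint(0, T, (actions.shape[0],))
    noise = torch.randn_like(actions)
    ab = alphas_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * actions + (1.0 - ab).sqrt() * noise  # forward diffusion
    loss = (denoise(x_t, t) - noise).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(reward_fn, batch=32):
    """Stage 2: treat the reverse chain as a policy; REINFORCE on return."""
    x = torch.randn(batch, H * A)
    log_prob = torch.zeros(batch)
    for t in reversed(range(T)):
        tt = torch.full((batch,), t)
        eps = denoise(x, tt)
        ab = alphas_bar[t]
        mu = (x - betas[t] / (1.0 - ab).sqrt() * eps) / (1.0 - betas[t]).sqrt()
        sigma = betas[t].sqrt()
        x = (mu + sigma * torch.randn_like(x)).detach()     # sample x_{t-1}
        log_prob = log_prob - (x - mu).pow(2).sum(-1) / (2.0 * sigma ** 2)
    ret = reward_fn(x.view(batch, H, A))                    # task-specific return
    loss = -((ret - ret.mean()) * log_prob).mean()          # baseline-subtracted PG
    opt.zero_grad(); loss.backward(); opt.step()
    return ret.mean().item()

# Usage: pre-train on pooled multi-task data, then fine-tune per task.
pretrain_step(torch.randn(64, H * A))                       # stand-in data batch
finetune_step(lambda plans: -plans.abs().sum(dim=(1, 2)))   # stand-in reward
\end{verbatim}
The key design point the sketch illustrates is the division of labor: stage 1 needs no reward labels and can absorb wide-coverage sub-optimal data, while stage 2 only has to shift the pre-trained distribution toward high-return action sequences, which is why a small amount of reward-labeled interaction suffices.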