The multi-modal distribution of action sequences in robotic manipulation poses critical challenges for imitation learning. To cope with this, existing approaches typically model the action space either as a discrete set of tokens or as a continuous, latent-variable distribution. Both choices involve trade-offs: methods that discretize actions into tokens lose fine-grained action variations, while methods that generate continuous actions in a single stage tend to produce unstable mode transitions. To address these limitations, we propose Primary-Fine Decoupling for Action Generation (PF-DAG), a two-stage framework that decouples coarse action consistency from fine-grained variations. First, we compress action chunks into a small set of discrete modes, enabling a lightweight policy to select a consistent coarse mode and avoid mode bouncing. Second, we learn a mode-conditioned MeanFlow policy that generates high-fidelity continuous actions. Theoretically, we prove that PF-DAG's two-stage design achieves a strictly lower MSE bound than single-stage generative policies. Empirically, PF-DAG outperforms state-of-the-art baselines across 56 tasks from the Adroit, DexArt, and MetaWorld benchmarks, and it further generalizes to real-world tactile dexterous manipulation tasks. Our work demonstrates that explicit mode-level decoupling enables both robust multi-modal modeling and reactive closed-loop control for robotic manipulation.
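The two-stage decoupling can be illustrated with a minimal toy sketch. Everything here is a stand-in assumption, not the paper's actual method: k-means centroids over action chunks play the role of the learned discrete modes, a nearest-centroid lookup plays the role of the lightweight mode-selection policy, and a small Gaussian refinement around the chosen mode stands in for the mode-conditioned MeanFlow generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(chunks, k=4, iters=20):
    """Stage-0 compression (illustrative): cluster action chunks into k
    discrete coarse modes, represented by their centroids."""
    centers = chunks[rng.choice(len(chunks), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((chunks[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = chunks[labels == j].mean(0)
    return centers

def select_mode(obs, centers):
    """Stage 1 (illustrative): a lightweight selector picks one coarse
    mode per observation, so the coarse decision stays consistent."""
    return int(np.argmin(((centers - obs) ** 2).sum(-1)))

def generate_action(mode, centers, noise=0.05):
    """Stage 2 (illustrative): refine the chosen mode into a continuous
    action; a Gaussian perturbation stands in for the MeanFlow policy."""
    return centers[mode] + noise * rng.standard_normal(centers.shape[1])

# Toy data: four well-separated behavior modes in a 2-D action space.
chunks = np.concatenate([rng.normal(m, 0.1, size=(50, 2))
                         for m in ([0, 0], [1, 0], [0, 1], [1, 1])])
centers = kmeans(chunks)

obs = np.array([0.9, 0.1])        # hypothetical observation
mode = select_mode(obs, centers)   # coarse, discrete decision
action = generate_action(mode, centers)  # fine, continuous action
```

Because the discrete mode is fixed before continuous generation, successive actions refine a single coarse behavior rather than hopping between modes, which is the intuition behind avoiding mode bouncing.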