Real-world fine-tuning of dexterous manipulation policies remains challenging due to limited real-world interaction budgets and highly multimodal action distributions. Diffusion-based policies, while expressive, do not permit conservative likelihood-based updates during fine-tuning because their action probabilities are intractable. Conventional Gaussian policies, in contrast, collapse under multimodality, particularly when actions are executed in chunks, and standard per-step critics fail to align with chunked execution, leading to poor credit assignment. We present SOFT-FLOW, a sample-efficient off-policy fine-tuning framework built on a normalizing flow (NF) policy that addresses these challenges. The NF policy yields exact likelihoods for multimodal action chunks, allowing conservative, stable policy updates through likelihood regularization and thereby improving sample efficiency. An action-chunked critic evaluates entire action sequences, aligning value estimation with the policy's temporal structure and improving long-horizon credit assignment. To our knowledge, this is the first demonstration of a likelihood-based, multimodal generative policy combined with chunk-level value learning on real robotic hardware. We evaluate SOFT-FLOW on two challenging real-world dexterous manipulation tasks: cutting tape with scissors retrieved from a case, and in-hand cube rotation with a palm-down grasp, both of which require precise, dexterous control over long horizons. On these tasks, SOFT-FLOW achieves stable, sample-efficient adaptation where standard methods struggle.
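The key property the abstract relies on, exact log-likelihoods of entire action chunks from a normalizing flow, can be sketched in a few lines. The snippet below is a minimal, illustrative RealNVP-style coupling flow over a flattened action chunk; it is not the paper's implementation, and all names (`AffineCoupling`, `FlowPolicy`, `chunk_dim`) and the toy fixed-weight "networks" are assumptions made for the example.

```python
import numpy as np

class AffineCoupling:
    """One coupling layer: transforms the second half of the vector
    conditioned on the first half. Invertible with a cheap log-det."""

    def __init__(self, dim, rng):
        self.half = dim // 2
        # Toy stand-in for a learned network: fixed random linear maps
        # producing a log-scale and a shift (illustrative assumption).
        self.W_s = 0.1 * rng.standard_normal((self.half, dim - self.half))
        self.W_t = 0.1 * rng.standard_normal((self.half, dim - self.half))

    def forward(self, x):
        x1, x2 = x[: self.half], x[self.half :]
        s = np.tanh(x1 @ self.W_s)   # bounded log-scale for stability
        t = x1 @ self.W_t            # shift
        y2 = x2 * np.exp(s) + t
        return np.concatenate([x1, y2]), s.sum()   # log|det J| = sum(s)

    def inverse(self, y):
        y1, y2 = y[: self.half], y[self.half :]
        s = np.tanh(y1 @ self.W_s)
        t = y1 @ self.W_t
        x2 = (y2 - t) * np.exp(-s)
        return np.concatenate([y1, x2]), -s.sum()

class FlowPolicy:
    """Flow over a flattened action chunk (e.g. H timesteps x action dim).
    Halves are swapped between layers so every coordinate is transformed."""

    def __init__(self, chunk_dim, n_layers=4, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [AffineCoupling(chunk_dim, rng) for _ in range(n_layers)]
        self.dim = chunk_dim

    def log_prob(self, action_chunk):
        """Exact chunk log-likelihood via the change-of-variables formula."""
        z, log_det = action_chunk, 0.0
        for layer in reversed(self.layers):
            z = z[::-1]                       # undo the inter-layer swap
            z, ld = layer.inverse(z)
            log_det += ld
        base = -0.5 * (z @ z + self.dim * np.log(2 * np.pi))  # N(0, I)
        return base + log_det

    def sample(self, rng):
        x = rng.standard_normal(self.dim)     # draw from the base Gaussian
        for layer in self.layers:
            x, _ = layer.forward(x)
            x = x[::-1]                       # swap halves between layers
        return x
```

Because `log_prob` is exact, a conservative fine-tuning loss can regularize directly against it, e.g. penalizing `-log_prob` of demonstration chunks alongside the RL objective; how SOFT-FLOW weights such a term is described in the paper, not here.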