Diffusion-based policies have achieved remarkable results in robotic manipulation but often struggle to adapt rapidly in dynamic scenarios, leading to delayed responses or task failures. We present DCDP, a Dynamic Closed-Loop Diffusion Policy framework that couples chunk-based action generation with real-time correction. DCDP combines a self-supervised dynamic feature encoder, cross-attention fusion, and an asymmetric action encoder-decoder to inject environmental dynamics before action execution, enabling real-time closed-loop action correction and improving adaptability in dynamic scenarios. In dynamic PushT simulations, DCDP improves adaptability by 19\% without retraining while adding only 5\% computational overhead. Its modular design supports plug-and-play integration, achieving both temporal coherence and real-time responsiveness in dynamic robotic scenarios, including real-world manipulation tasks. The project page is at: https://github.com/wupengyuan/dcdp