When performing tasks like laundry, humans naturally coordinate both hands to manipulate objects and anticipate how their actions will change the state of the clothes. Achieving such coordination in robotics, however, remains challenging: it requires modeling object movement, predicting future states, and generating precise bimanual actions. In this work, we address these challenges by infusing the predictive nature of human manipulation strategies into robot imitation learning. Specifically, we disentangle task-related state transitions from agent-specific inverse dynamics modeling to enable effective bimanual coordination. Using a demonstration dataset, we train a diffusion model to predict future states from historical observations, envisioning how the scene will evolve; an inverse dynamics model then computes the robot actions that realize those predicted states. Our key insight is that explicitly modeling object movement facilitates learning policies for coordinated bimanual manipulation. Evaluated across diverse simulated and real-world setups, including multimodal goal configurations, bimanual coordination, deformable objects, and multi-object scenes, our framework consistently outperforms state-of-the-art state-to-action mapping policies. It demonstrates a remarkable capacity to handle multimodal goal configurations and action distributions, maintain stability across different control modes, and synthesize a broader range of behaviors than those present in the demonstration dataset.
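To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of the inference path described above: a diffusion model denoises a future-state sequence conditioned on an observation history, and an inverse dynamics model maps the current state and the first predicted state to an action. All class names, network shapes, and the DDPM-style noise schedule are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class StatePredictor(nn.Module):
    """Denoising network: predicts the noise on a future-state sequence,
    conditioned on a history of past observations (shapes are assumptions)."""
    def __init__(self, state_dim, hist_len, horizon, hidden=256):
        super().__init__()
        in_dim = state_dim * (hist_len + horizon) + 1  # history + noisy future + timestep
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim * horizon),
        )
        self.horizon, self.state_dim = horizon, state_dim

    def forward(self, noisy_future, history, t):
        x = torch.cat(
            [noisy_future.flatten(1), history.flatten(1), t.float().unsqueeze(1)],
            dim=1,
        )
        return self.net(x).view(-1, self.horizon, self.state_dim)

class InverseDynamics(nn.Module):
    """Maps a pair of consecutive states to the action connecting them."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s_t, s_next):
        return self.net(torch.cat([s_t, s_next], dim=-1))

@torch.no_grad()
def predict_then_act(predictor, inv_dyn, history, betas):
    """DDPM-style ancestral sampling of future states, then inverse
    dynamics to recover the action; the schedule `betas` is illustrative."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    batch = history.shape[0]
    x = torch.randn(batch, predictor.horizon, predictor.state_dim)  # start from noise
    for t in reversed(range(len(betas))):
        t_batch = torch.full((batch,), t, dtype=torch.long)
        eps = predictor(x, history, t_batch)                 # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else mean
    s_t = history[:, -1]           # most recent observed state
    s_next = x[:, 0]               # first predicted future state
    return inv_dyn(s_t, s_next)    # action that should achieve the predicted state

# Usage with toy dimensions (all values hypothetical):
predictor = StatePredictor(state_dim=14, hist_len=2, horizon=8)
inv_dyn = InverseDynamics(state_dim=14, action_dim=14)
history = torch.randn(1, 2, 14)                 # (batch, hist_len, state_dim)
betas = torch.linspace(1e-4, 0.02, 50)          # short diffusion schedule
action = predict_then_act(predictor, inv_dyn, history, betas)
```

The separation shown here mirrors the abstract's disentanglement: the diffusion sampler only reasons about how the scene evolves, while the agent-specific inverse dynamics model is the sole component that knows about the robot's action space.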