We present the Dynamics-Guided Diffusion Model (DGDM), a data-driven framework for generating task-specific manipulator designs without task-specific training. Given object shapes and task specifications, DGDM generates sensor-less manipulator designs that can blindly manipulate objects toward desired motions and poses using an open-loop parallel motion. The framework 1) flexibly represents manipulation tasks as interaction profiles, 2) represents the design space with a geometric diffusion model, and 3) efficiently searches this design space using gradients from a dynamics network trained without any task information. We evaluate DGDM on manipulation tasks ranging from shifting/rotating objects to converging objects to a specific pose. Our generated designs outperform optimization-based and unguided-diffusion baselines by a relative 31.5% and 45.3% in average success rate. With the ability to generate a new design within 0.8 s, DGDM facilitates rapid design iteration and promotes the adoption of data-driven approaches for robot mechanism design. Qualitative results are best viewed on our project website: https://dgdm-robot.github.io/.
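The core mechanism (a diffusion sampler whose denoising updates are steered by the gradient of a task objective supplied by a separate dynamics network) can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in: the one-dimensional "score", the `task_objective_grad` function, and the scalar "design" vector are analytic toys, not the geometric diffusion model or learned dynamics network from the paper; the sketch only shows how objective gradients bias sampling without retraining the generator.

```python
import numpy as np

def denoise_step(x, guidance_scale=0.0, objective_grad=None):
    """One toy reverse-diffusion step: move x along the model score,
    optionally nudged by the gradient of a task objective (guidance).
    Toy score assumes the data distribution is concentrated at 0."""
    score = -x
    if objective_grad is not None:
        # Guidance: add the task gradient without touching model weights,
        # mirroring how a pretrained dynamics network can steer sampling.
        score = score + guidance_scale * objective_grad(x)
    return x + 0.1 * score

def task_objective_grad(x):
    """Gradient of a toy quadratic objective preferring designs near x = 1
    (stand-in for gradients from a dynamics network)."""
    return -(x - 1.0)

def sample(guidance_scale, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(4)  # random initial "design" parameters
    for _ in range(steps):
        x = denoise_step(x, guidance_scale, task_objective_grad)
    return x

unguided = sample(guidance_scale=0.0)  # settles near the data mode (0)
guided = sample(guidance_scale=1.0)    # compromise between model and task
```

With guidance, the update follows the combined score `-x - (x - 1)`, whose fixed point `x = 0.5` balances the generative prior against the task objective; scaling the guidance term trades off design plausibility against task performance, which is the knob DGDM exploits to reuse one task-agnostic dynamics network across tasks.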