Diffusion models have been shown to excel in robotic imitation learning by mastering the challenge of modeling complex distributions. However, because their popularity stems from image generation, sampling speed has traditionally not been a priority, limiting their application to dynamic tasks. While recent work has improved the sampling speed of diffusion-based robotic policies, it is restricted to techniques from the image generation domain. We adapt Temporally Entangled Diffusion (TEDi), a framework specific to trajectory generation, to speed up diffusion-based policies for imitation learning. We introduce TEDi Policy, with novel regimes for training and sampling, and show that it drastically improves sampling speed while remaining performant when applied to state-of-the-art diffusion-based imitation learning policies.