Classical methods in robot motion planning, such as sampling-based and optimization-based approaches, often struggle to scale to higher-dimensional state spaces and complex environments. Diffusion models, known for their ability to learn complex, high-dimensional, multi-modal data distributions, offer a promising alternative for motion planning problems and have already shown encouraging results. However, most current approaches train a model for a single environment, limiting generalization to environments not seen during training. Techniques that do train a model for multiple environments rely on a specific camera to provide the model with the necessary environmental information and therefore always require that sensor. To adapt effectively to diverse scenarios without retraining, this research proposes Context-Aware Motion Planning Diffusion (CAMPD). CAMPD leverages a classifier-free denoising diffusion probabilistic model, conditioned on sensor-agnostic contextual information. An attention mechanism, integrated into the well-known U-Net architecture, conditions the model on an arbitrary number of contextual parameters. CAMPD is evaluated on a 7-DoF robot manipulator and benchmarked against state-of-the-art approaches on real-world tasks, demonstrating its ability to generalize to unseen environments and to generate high-quality, multi-modal trajectories in a fraction of the time required by existing methods.
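To illustrate how an attention mechanism can condition a denoising network on an arbitrary number of context tokens, the following is a minimal NumPy sketch of cross-attention with a residual connection. All names, shapes, and the single-head formulation are illustrative assumptions, not details taken from CAMPD itself:

```python
import numpy as np

def cross_attention(x, context, Wq, Wk, Wv):
    """Condition trajectory features on a variable-length set of context tokens.

    x:       (T, d)  trajectory features (e.g., one token per waypoint)
    context: (M, dc) M context tokens; M may vary per scene
    Wq, Wk, Wv: illustrative projection matrices (assumed, not from the paper)
    """
    q = x @ Wq                      # queries from the trajectory, (T, dk)
    k = context @ Wk                # keys from the context,       (M, dk)
    v = context @ Wv                # values from the context,     (M, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # numerically stable softmax over the context dimension
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    # residual connection: output shape is independent of M,
    # which is what lets the model take any number of context parameters
    return x + attn @ v
```

Because the softmax is taken over the context axis and the output shape depends only on `x`, the same network weights handle scenes described by one context token or many.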