We present DiPPeST, a novel image- and goal-conditioned diffusion-based trajectory generator for quadrupedal robot path planning. DiPPeST is a zero-shot adaptation of our previously introduced diffusion-based 2D global trajectory generator (DiPPeR). The proposed system incorporates a novel strategy for local real-time path refinement that is reactive to camera input, without requiring any further training, image processing, or environment-interpretation techniques. DiPPeST achieves a 92% success rate in obstacle avoidance in nominal environments and an average 88% success rate in environments up to 3.5 times more complex in pixel variation than those used for DiPPeR. A visual-servoing framework is developed to enable real-world execution; tested on a quadruped robot, it achieves an 80% success rate across different environments and outperforms complex state-of-the-art local planners in narrow environments.