In this work, we introduce a planning neural operator (PNO) for predicting the value function of a motion planning problem. We recast value function computation as learning a single operator that maps the cost function space to the value function space, which we prove is equivalent to solving an Eikonal partial differential equation (PDE). Through this reformulation, a trained PNO generalizes to new motion planning problems without retraining. Moreover, despite being trained on a finite number of samples at coarse resolution, the PNO inherits the zero-shot super-resolution property of neural operators. We demonstrate accurate value function approximation at 16 times the training resolution on the MovingAI Lab's 2D city dataset and compare against state-of-the-art neural value function predictors on 3D scenes from the iGibson building dataset. Lastly, we investigate using the value function output of the PNO as a heuristic to accelerate motion planning. By introducing an inductive bias layer that guarantees the predicted value functions satisfy the triangle inequality, we prove that the PNO heuristic is $\epsilon$-consistent. With this heuristic, we achieve a 30% decrease in nodes visited while obtaining near-optimal path lengths on the MovingAI Lab 2D city dataset, compared to classical planning methods (A*, RRT*).
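For context, the connection between value functions and the Eikonal PDE takes the standard form below. This is a sketch with assumed notation ($c(x)$ for the cost field, $x_g$ for the goal state), not the paper's exact formulation:

```latex
% Value function V of a shortest-path problem with cost field c
% satisfies the Eikonal PDE with a zero boundary condition at the goal:
\[
  \|\nabla V(x)\| = c(x), \qquad V(x_g) = 0 .
\]
% The epsilon-consistency property of a heuristic h over adjacent states
% x, x' with edge cost c(x, x') is the relaxed triangle inequality:
\[
  h(x) \le \epsilon \, c(x, x') + h(x'), \qquad h(x_g) = 0 .
\]
```

Under this reading, learning the cost-to-value operator amounts to learning a solution operator for the Eikonal PDE, and enforcing the triangle inequality on the predicted value function is what yields the $\epsilon$-consistency guarantee for the heuristic.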