Alongside optimization-based planners, sampling-based approaches are widely used for trajectory planning in autonomous driving due to their simplicity. Model predictive path integral (MPPI) control is a framework that builds on optimization principles while incorporating stochastic sampling of input trajectories. This paper investigates several sampling approaches for trajectory generation. In this context, normalizing flows, which originate from the field of variational inference, are considered for generating sampling distributions, as they model transformations from simple to more complex distributions. Accordingly, learning-based normalizing flow models are trained to explore the input domain more efficiently for the task at hand. The developed algorithm and the proposed sampling distributions are evaluated in two simulation scenarios.
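The core MPPI idea referenced above can be illustrated with a minimal sketch: sample perturbed input trajectories, roll them out through the system dynamics, and update the nominal inputs with a cost-weighted average of the perturbations. All names (`mppi_step`, `dynamics`, `cost`) and parameter values here are hypothetical illustrations, not the paper's implementation, which additionally learns the sampling distribution with normalizing flows.

```python
import numpy as np

def mppi_step(u, dynamics, cost, x0, num_samples=256, horizon=20,
              sigma=0.5, lam=1.0, rng=None):
    """One MPPI update (illustrative sketch, Gaussian sampling only).

    u        : nominal input sequence, shape (horizon,)
    dynamics : x_next = dynamics(x, u_t)
    cost     : stage cost c = cost(x)
    lam      : temperature of the exponential weighting
    """
    rng = rng or np.random.default_rng(0)
    # Sample input-trajectory perturbations from a simple Gaussian;
    # the paper replaces this base distribution with a learned flow.
    eps = rng.standard_normal((num_samples, horizon)) * sigma
    costs = np.empty(num_samples)
    for k in range(num_samples):
        x, c = x0, 0.0
        for t in range(horizon):
            x = dynamics(x, u[t] + eps[k, t])  # roll out perturbed inputs
            c += cost(x)
        costs[k] = c
    # Exponentially weight low-cost rollouts (shift by min for stability).
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    # Update nominal inputs with the weighted average perturbation.
    return u + w @ eps
```

A learned normalizing flow would replace the Gaussian `eps` draw with samples pushed through a trained invertible transformation, concentrating rollouts in promising regions of the input domain.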