Human-like autonomous driving controllers have the potential to enhance how passengers perceive autonomous vehicles. This paper proposes DriViDOC, a model for Driving from Vision through Differentiable Optimal Control, and its application to learning personalized autonomous driving controllers from human demonstrations. DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction. Our approach leverages the differentiability of parametric NMPC, enabling end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various driving styles collected on a motion-base driving simulator. During online testing, the model successfully imitates different driving styles, and the interpretable NMPC parameters provide insight into how specific driving behaviors are achieved. Our experimental results show that DriViDOC outperforms other methods combining NMPC and neural networks, with an average improvement of 20% in imitation scores.
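To make the end-to-end idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of learning through a differentiable optimal-control layer: a CNN maps camera frames to NMPC parameters, a differentiable control layer maps those parameters to a control action, and the imitation loss is backpropagated through both stages. All names (ParamNet, differentiable_mpc) and the toy longitudinal model are assumptions for illustration; the paper uses a full parametric NMPC, whereas here the inner solver is replaced by unrolled gradient descent on a quadratic tracking cost.

```python
# Hypothetical sketch of end-to-end imitation learning through a
# differentiable "MPC" layer; the inner solver is a toy unrolled optimizer,
# not the paper's nonlinear MPC.
import torch
import torch.nn as nn

class ParamNet(nn.Module):
    """CNN that infers control parameters (e.g. cost weights) from an image."""
    def __init__(self, n_params: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_params),
        )

    def forward(self, img):
        # Softplus keeps the inferred cost weights positive.
        return nn.functional.softplus(self.backbone(img))

def differentiable_mpc(params, x0, horizon=10, iters=30, lr=0.1):
    """Toy differentiable control layer: optimize a control sequence by
    unrolled gradient descent on a parametrized tracking cost.
    params: (batch, 4) -> [w_pos, w_vel, w_u, target_speed]."""
    w_pos, w_vel, w_u, v_ref = params.unbind(dim=-1)
    u = torch.zeros(x0.shape[0], horizon, requires_grad=True)
    for _ in range(iters):
        pos, vel = x0[:, 0], x0[:, 1]
        cost = torch.zeros_like(pos)
        for t in range(horizon):
            vel = vel + u[:, t]              # simple longitudinal dynamics
            pos = pos + vel
            cost = cost + w_pos * pos**2 + w_vel * (vel - v_ref)**2 + w_u * u[:, t]**2
        # create_graph=True lets the outer imitation loss differentiate
        # through these inner optimization steps.
        (grad,) = torch.autograd.grad(cost.sum(), u, create_graph=True)
        u = u - lr * grad
    return u[:, 0]  # first control action, as in receding-horizon control

# One training step: imitate a recorded human control action.
net = ParamNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
img = torch.randn(2, 3, 96, 96)   # placeholder camera frames
x0 = torch.randn(2, 2)            # placeholder vehicle states
u_human = torch.randn(2)          # placeholder demonstrated controls

params = net(img)
u_pred = differentiable_mpc(params, x0)
loss = nn.functional.mse_loss(u_pred, u_human)
loss.backward()                   # gradients reach the CNN through the control layer
opt.step()
```

In the paper's setting, the inner layer would be a constrained nonlinear MPC differentiated implicitly rather than by unrolling, but the training loop has the same structure: the network never predicts controls directly, only the parameters of the optimal-control problem.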