The choice of representation plays a key role in self-driving. Bird's eye view (BEV) representations have shown remarkable performance in recent years. In this paper, we propose to learn object-centric representations in BEV to distill a complex scene into more actionable information for self-driving. We first learn to place objects into slots with a slot attention model on BEV sequences. Based on these object-centric representations, we then train a transformer to learn to drive as well as reason about the future of other vehicles. We find that object-centric slot representations outperform both scene-level and object-level approaches that use the exact attributes of objects. Slot representations naturally incorporate information about objects from their spatial and temporal context, such as position, heading, and speed, without these attributes being explicitly provided. Our model with slots achieves a higher completion rate on the provided routes and, consequently, a higher driving score, with lower variance across multiple runs, affirming slots as a reliable alternative among object-centric approaches. Additionally, we validate our model's performance as a world model through forecasting experiments, demonstrating its capability to predict future slot representations accurately. The code and the pre-trained models can be found at https://kuis-ai.github.io/CarFormer/.
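To make the slot mechanism referenced above concrete, the following is a minimal, illustrative sketch of the iterative slot attention update (in the style of Locatello et al.), which groups a set of BEV feature vectors into a small number of slots. This is not the authors' implementation: it omits the learned projections, GRU update, and layer normalization of the full model, and all names and sizes are placeholders.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, iters=3, seed=0):
    """Group inputs of shape (N, D) into `num_slots` slots of shape (num_slots, D).

    Simplified sketch: slots are randomly initialized, then refined by
    attention in which each input competes to be explained by one slot.
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    slots = rng.normal(size=(num_slots, d))
    for _ in range(iters):
        # dot-product attention logits between inputs and slots, scaled by sqrt(D)
        logits = inputs @ slots.T / np.sqrt(d)              # (N, K)
        # key property of slot attention: softmax over the *slot* axis,
        # so slots compete for each input
        attn = softmax(logits, axis=1)                      # (N, K)
        # weighted mean of inputs per slot (normalize attention over inputs)
        weights = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = weights.T @ inputs                          # (K, D)
    return slots

# usage: 16 hypothetical BEV feature vectors of dimension 8 -> 4 slots
features = np.random.default_rng(1).normal(size=(16, 8))
slots = slot_attention(features, num_slots=4)
```

The softmax over the slot axis (rather than the input axis) is what makes the slots specialize: each input's attention mass is split among slots, so slots are pushed to bind to disjoint groups of features, e.g. distinct vehicles in a BEV frame.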