Motion prediction and planning are vital tasks in autonomous driving, and recent efforts have shifted toward machine learning-based approaches. The challenges include understanding diverse road topologies, reasoning about traffic dynamics over long time horizons, interpreting heterogeneous behaviors, and generating policies in a large continuous state space. Inspired by the success of large language models in addressing similar complexities through model scaling, we introduce a scalable trajectory model called State Transformer (STR). STR reformulates motion prediction and motion planning by arranging observations, states, and actions into one unified sequence modeling task. Our approach unites trajectory generation with other sequence modeling problems, enabling rapid iteration on breakthroughs from neighboring domains such as language modeling. Remarkably, experimental results reveal that large trajectory models (LTMs), such as STR, adhere to scaling laws, exhibiting outstanding adaptability and learning efficiency. Qualitative results further demonstrate that LTMs make plausible predictions in scenarios that diverge significantly from the training data distribution. LTMs also learn to perform complex reasoning for long-term planning, without explicit loss design or costly high-level annotations.