Scalable and realistic simulation of multi-agent traffic behavior is critical for advancing autonomous driving technologies. Although existing data-driven simulators have made significant strides in this domain, they predominantly rely on supervised learning to align simulated distributions with real-world driving scenarios. A persistent challenge, however, lies in the distributional shift that arises between training and testing, which often undermines model generalization in unseen environments. To address this limitation, we propose SMART-R1, a novel R1-style reinforcement fine-tuning paradigm tailored for next-token prediction models to better align agent behavior with human preferences and evaluation metrics. Our approach introduces a metric-oriented policy optimization algorithm to improve distribution alignment and an iterative "SFT-RFT-SFT" training strategy that alternates between Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT) to maximize performance gains. Extensive experiments on the large-scale Waymo Open Motion Dataset (WOMD) validate the effectiveness of this simple yet powerful R1-style training framework in enhancing foundation models. The results on the Waymo Open Sim Agents Challenge (WOSAC) showcase that SMART-R1 achieves state-of-the-art performance with an overall realism meta score of 0.7858, ranking first on the leaderboard at the time of submission.
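To make the "SFT-RFT-SFT" idea concrete, the following is a minimal, hypothetical sketch of alternating supervised fine-tuning with a metric-oriented reinforcement step on a toy discrete-token policy. It is not the paper's implementation: the policy representation, the `train_sft`/`train_rft` helpers, the REINFORCE-with-baseline update, and the toy metric are all illustrative assumptions standing in for the next-token model and the realism metrics described above.

```python
# Hypothetical sketch of the iterative SFT -> RFT -> SFT loop with a
# metric-oriented reward. All names and the toy policy are illustrative
# placeholders, not the paper's actual method or codebase.
import math
import random


def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]


def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1


def train_sft(policy, demos, lr=0.1, epochs=50):
    """Supervised step: nudge the policy toward logged (state, action) pairs."""
    for _ in range(epochs):
        for state, action in demos:
            probs = softmax(policy[state])
            for k in range(len(probs)):
                # Gradient of log p(action) w.r.t. logit k: 1[k==action] - p_k
                policy[state][k] += lr * ((1.0 if k == action else 0.0) - probs[k])


def train_rft(policy, states, metric_score, lr=0.1, rollouts=8, epochs=50):
    """RFT step: sample actions, score them with the (realism) metric, and
    reinforce actions above the per-state baseline (REINFORCE with baseline)."""
    for _ in range(epochs):
        for state in states:
            probs = softmax(policy[state])
            samples = [sample(probs) for _ in range(rollouts)]
            rewards = [metric_score(state, a) for a in samples]
            baseline = sum(rewards) / len(rewards)
            for a, r in zip(samples, rewards):
                adv = r - baseline
                for k in range(len(probs)):
                    policy[state][k] += lr * adv * ((1.0 if k == a else 0.0) - probs[k])


# Toy setup: 2 states, 3 discrete "motion tokens"; the metric prefers token 2
# in state 0 and token 0 in state 1 (stand-ins for realism-metric preferences).
policy = {0: [0.0, 0.0, 0.0], 1: [0.0, 0.0, 0.0]}
demos = [(0, 2), (1, 0)]
metric = lambda s, a: 1.0 if (s, a) in {(0, 2), (1, 0)} else 0.0

train_sft(policy, demos)            # SFT: imitate logged behavior
train_rft(policy, [0, 1], metric)   # RFT: optimize the evaluation metric directly
train_sft(policy, demos)            # final SFT pass, as in the "SFT-RFT-SFT" schedule
```

The point of the sketch is only the schedule: a supervised pass anchors the policy to logged behavior, the reinforcement pass shifts probability mass toward actions the metric rewards, and a final supervised pass pulls the policy back toward the data distribution while retaining the metric-driven gains.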