Autonomous vehicles (AVs) collect and pseudo-label terabytes of multi-modal data localized to HD maps during normal fleet testing. However, identifying interesting and safety-critical scenarios from uncurated driving logs remains a significant challenge. Traditional scenario mining techniques are error-prone and prohibitively time-consuming, often relying on hand-crafted structured queries. In this work, we revisit spatio-temporal scenario mining through the lens of recent vision-language models (VLMs) to detect whether a described scenario occurs in a driving log and, if so, precisely localize it in both time and space. To address this problem, we introduce RefAV, a large-scale dataset of 10,000 diverse natural language queries that describe complex multi-agent interactions relevant to motion planning, derived from 1,000 driving logs in the Argoverse 2 Sensor dataset. We evaluate several referential multi-object trackers and present an empirical analysis of our baselines. Notably, we find that naively repurposing off-the-shelf VLMs yields poor performance, suggesting that scenario mining presents unique challenges. Lastly, we discuss our recently held competition and share insights from the community. Our code and dataset are available at https://github.com/CainanD/RefAV/ and https://argoverse.github.io/user-guide/tasks/scenario_mining.html.