Traffic accidents present complex challenges for autonomous driving, often featuring unpredictable scenarios that hinder accurate system interpretation and response. Nonetheless, prevailing methodologies fall short in elucidating the causes of accidents and proposing preventive measures due to the paucity of training data specific to accident scenarios. In this work, we introduce AVD2 (Accident Video Diffusion for Accident Video Description), a novel framework that enhances accident scene understanding by generating accident videos aligned with detailed natural language descriptions and reasoning, resulting in the contributed EMM-AU (Enhanced Multi-Modal Accident Video Understanding) dataset. Empirical results reveal that integrating the EMM-AU dataset establishes state-of-the-art performance across both automated metrics and human evaluations, markedly advancing the domains of accident analysis and prevention. Project resources are available at https://an-answer-tree.github.io