As autonomous driving systems increasingly become part of daily transportation, the ability to accurately anticipate and mitigate potential traffic accidents is paramount. Traditional accident anticipation models, which rely primarily on dashcam video, are adept at predicting when an accident may occur but fall short in localizing the incident and identifying the entities involved. Addressing this gap, this study introduces a novel framework that integrates Large Language Models (LLMs) to enhance predictive capability across multiple dimensions: what, when, and where an accident might occur. We develop an innovative chain-based attention mechanism that dynamically adjusts to prioritize high-risk elements within complex driving scenes. This mechanism is complemented by a three-stage model that processes the outputs of smaller models into detailed multimodal inputs for the LLMs, enabling a more nuanced understanding of traffic dynamics. Empirical validation on the DAD, CCD, and A3D datasets demonstrates superior performance in Average Precision (AP) and mean Time-To-Accident (mTTA), establishing new benchmarks for accident anticipation. Our approach not only advances the technological framework for autonomous driving safety but also improves human-AI interaction, making the predictive insights generated by autonomous systems more intuitive and actionable.
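Since the abstract only names the chain-based attention mechanism without giving its equations, the following is a minimal PyTorch sketch of one plausible reading: each link in the chain re-scores per-object features against a context vector carried over from the previous link, so high-risk objects accumulate attention across the chain. The class name `ChainAttention`, the number of links, and the mean-pooled initial context are all illustrative assumptions, not the paper's published code.

```python
# Hypothetical sketch of a chain-based attention pass: each link re-weights
# per-object features conditioned on the context aggregated by the previous
# link, progressively concentrating attention on high-risk scene elements.
import torch
import torch.nn as nn


class ChainAttention(nn.Module):
    def __init__(self, dim: int, num_links: int = 3):
        super().__init__()
        # One scoring head per link in the chain; names are illustrative.
        self.links = nn.ModuleList(
            nn.Linear(2 * dim, 1) for _ in range(num_links)
        )

    def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
        # obj_feats: (batch, num_objects, dim) per-frame object features.
        context = obj_feats.mean(dim=1)  # initial scene context (assumption)
        for link in self.links:
            # Score each object against the running context.
            expanded = context.unsqueeze(1).expand_as(obj_feats)
            scores = link(torch.cat([obj_feats, expanded], dim=-1))
            weights = torch.softmax(scores, dim=1)      # (B, N, 1) over objects
            context = (weights * obj_feats).sum(dim=1)  # refined context
        return context  # risk-weighted scene representation


# Usage: feats = torch.randn(2, 8, 256); ChainAttention(256)(feats) -> (2, 256)
```

In this reading, the "chain" is the sequential dependency between links: each softmax is conditioned on the previous link's output, rather than all heads attending independently as in standard multi-head attention.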