Large Language Models (LLMs) are increasingly integrated into vehicle-based digital assistants, where unsafe, ambiguous, or legally incorrect responses can lead to serious safety, ethical, and regulatory consequences. Despite growing interest in LLM safety, existing taxonomies and evaluation frameworks remain largely general-purpose and fail to capture the domain-specific risks inherent in real-world driving scenarios. In this paper, we introduce DriveSafe, a hierarchical, four-level risk taxonomy designed to systematically characterize safety-critical failure modes of LLM-based driving assistants. The taxonomy comprises 129 fine-grained atomic risk categories spanning technical, legal, societal, and ethical dimensions; it is grounded in real-world driving regulations and safety principles and has been reviewed by domain experts. From this taxonomy, we construct safety-critical prompts and, to validate their safety relevance and realism, evaluate the refusal behavior of six widely deployed LLMs on them. Our analysis shows that the evaluated models often fail to appropriately refuse unsafe or non-compliant driving-related queries, underscoring the limitations of general-purpose safety alignment in driving contexts.