AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security. The non-deterministic, sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks. Existing frameworks only partially address these challenges: they either capture only specific vulnerabilities or require modeling of complete agents. To address these limitations, we introduce threat snapshots: a framework that isolates the specific states in an agent's execution flow where LLM vulnerabilities manifest, enabling the systematic identification and categorization of security risks that propagate from the LLM to the agent level. We apply this framework to construct the $b^3$ benchmark, a security benchmark built on 194,331 unique crowdsourced adversarial attacks. Evaluating 34 popular LLMs with it reveals, among other insights, that enhanced reasoning capabilities improve security, whereas model size does not correlate with it. We release our benchmark, dataset, and evaluation code to facilitate widespread adoption by LLM providers and practitioners, offering guidance to agent developers and incentivizing model developers to prioritize backbone security improvements.