As large language models (LLMs) evolve into autonomous agents, their real-world applicability has expanded significantly, accompanied by new security challenges. Most existing agent defense mechanisms adopt a mandatory checking paradigm, in which security validation is forcibly triggered at predefined stages of the agent lifecycle. In this work, we argue that effective agent security should be intrinsic and selective rather than architecturally decoupled and mandatory. We propose Spider-Sense, an event-driven defense framework based on Intrinsic Risk Sensing (IRS), which allows agents to maintain latent vigilance and trigger defenses only upon perceiving risk. Once triggered, Spider-Sense invokes a hierarchical defense mechanism that balances efficiency and precision: it resolves known attack patterns via lightweight similarity matching and escalates ambiguous cases to deep internal reasoning, thereby eliminating reliance on external models. To facilitate rigorous evaluation, we introduce S$^2$Bench, a lifecycle-aware benchmark featuring realistic tool execution and multi-stage attacks. Extensive experiments demonstrate that Spider-Sense achieves competitive or superior defense performance, attaining the lowest Attack Success Rate (ASR) and False Positive Rate (FPR) with only a marginal latency overhead of 8.3\%.