The rise of AI agents introduces complex safety and security challenges arising from autonomous tool use and environmental interaction. Current guardrail models lack agentic risk awareness and transparency in risk diagnosis. To build an agentic guardrail that covers the broad space of complex risky behaviors, we first propose a unified three-dimensional taxonomy that orthogonally categorizes agentic risks by their source (where), failure mode (how), and consequence (what). Guided by this structured, hierarchical taxonomy, we introduce a new fine-grained agentic safety benchmark (ATBench) and a Diagnostic Guardrail framework for agent safety and security (AgentDoG). AgentDoG provides fine-grained, contextual monitoring across agent trajectories. More crucially, AgentDoG can diagnose the root causes of unsafe actions and of seemingly safe but unreasonable actions, offering provenance and transparency beyond binary labels to facilitate effective agent alignment. AgentDoG is available in three sizes (4B, 7B, and 8B parameters) across the Qwen and Llama model families. Extensive experimental results demonstrate that AgentDoG achieves state-of-the-art performance in agentic safety moderation across diverse and complex interactive scenarios. All models and datasets are openly released.