As AI agents automate critical workloads, they remain vulnerable to indirect prompt injection (IPI) attacks. Current defenses rely on monitoring protocols that jointly evaluate an agent's Chain-of-Thought (CoT) and tool-use actions to ensure alignment with user intent. We demonstrate that these monitoring-based defenses can be bypassed via a novel Agent-as-a-Proxy attack, in which the injected payload treats the agent as a delivery mechanism, evading both the agent and its monitor simultaneously. While prior work on scalable oversight has focused on whether small monitors can supervise large agents, we show that even frontier-scale monitors are vulnerable: large monitoring models such as Qwen2.5-72B can be bypassed by agents of comparable or lesser capability, including GPT-4o mini and Llama-3.1-70B. On the AgentDojo benchmark, we achieve high attack success rates against the AlignmentCheck and Extract-and-Evaluate monitors across diverse monitoring LLMs. Our findings suggest that current monitoring-based agentic defenses are fundamentally fragile regardless of model scale.