AI agents that autonomously interact with external tools and environments show great promise across real-world applications. However, the external data that agents consume also creates the risk of indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided by benchmarks such as AgentDojo, significant progress has been made in developing defenses against these attacks. As the technology matures and agents are entrusted with increasingly complex tasks, there is a pressing need for benchmarks to evolve accordingly, reflecting the threat landscape faced by emerging agentic systems. In this work, we reveal three fundamental flaws in current benchmarks and push the frontier along these dimensions: (i) the lack of dynamic, open-ended tasks; (ii) the lack of helpful instructions; and (iii) overly simplistic user tasks. To bridge this gap, we introduce AgentDyn, a manually designed benchmark featuring 60 challenging open-ended tasks and 560 injection test cases across Shopping, GitHub, and Daily Life scenarios. Unlike prior static benchmarks, AgentDyn requires dynamic planning and incorporates helpful third-party instructions. Our evaluation of ten state-of-the-art defenses shows that almost all of them are either insufficiently secure or suffer from significant over-defense, revealing that existing defenses remain far from ready for real-world deployment. Our benchmark is available at https://github.com/leolee99/AgentDyn.