AI control protocols serve as a defense mechanism to stop untrusted LLM agents from causing harm in autonomous settings. Prior work treats this as a security problem, stress-testing protocols with exploits that use the deployment context to subtly complete harmful side tasks, such as backdoor insertion. In practice, most AI control protocols are fundamentally based on LLM monitors, which can become a single point of failure. We study adaptive attacks by an untrusted model that knows both the protocol and the monitor model, which is plausible if the untrusted model was trained with a later knowledge cutoff or can search for this information autonomously. We instantiate a simple adaptive attack vector in which the attacker embeds publicly known or zero-shot prompt injections in the model's outputs. Using this tactic, frontier models consistently evade diverse monitors and complete malicious tasks on two main AI control benchmarks. The attack works universally against current protocols that rely on a monitor. Furthermore, the recent Defer-to-Resample protocol even backfires, as its resampling amplifies the prompt injection and effectively reframes it as a best-of-$n$ attack. In general, adaptive attacks on monitor models represent a major blind spot in current control protocols and should become a standard component of evaluations for future AI control mechanisms.
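As a simplified illustration of the best-of-$n$ framing (our reading of the mechanism, under an explicit independence assumption not stated in the abstract): suppose each resample drawn by Defer-to-Resample carries the embedded injection and bypasses the monitor independently with probability $p$. Then over $n$ resamples,
\[
\Pr[\text{injection evades the monitor at least once}] \;=\; 1 - (1 - p)^n,
\]
which is monotonically increasing in $n$. Each additional resample hands the attacker another attempt, so the protocol's own resampling budget amplifies the attack rather than diluting it.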