Sufficiently capable models could subvert human oversight and decision-making in important contexts. For example, in the context of AI development, models could covertly sabotage efforts to evaluate their own dangerous capabilities, to monitor their behavior, or to make decisions about their deployment. We refer to this family of abilities as sabotage capabilities. We develop a set of related threat models and evaluations. These evaluations are designed to provide evidence that a given model, operating under a given set of mitigations, could not successfully sabotage the activities of a frontier model developer or other large organization in any of these ways. We demonstrate these evaluations on Anthropic's Claude 3 Opus and Claude 3.5 Sonnet models. Our results suggest that, for these models, minimal mitigations are currently sufficient to address sabotage risks, but that more realistic evaluations and stronger mitigations will likely soon be necessary as capabilities improve. We also survey related evaluations we tried and abandoned. Finally, we discuss the advantages of mitigation-aware capability evaluations and of simulating large-scale deployments using small-scale statistics.
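The closing point about simulating large-scale deployments with small-scale statistics can be made concrete with a back-of-the-envelope extrapolation. The sketch below is a minimal illustration under assumed choices (rule-of-three and Wilson score upper bounds at 95% confidence, independent episodes), not the paper's actual procedure: it bounds the per-episode probability that a sabotage attempt evades detection from a small trial, then compounds that bound over a deployment-scale number of episodes.

```python
"""Illustrative sketch: extrapolating small-scale evaluation statistics to a
large-scale deployment. Function names, confidence level, and the choice of
bounds are assumptions for illustration, not the paper's method."""

import math


def evasion_upper_bound(k_undetected: int, n_episodes: int, z: float = 1.96) -> float:
    """95% upper confidence bound on the per-episode evasion probability.

    With zero observed evasions, uses the classical rule of three (3/n);
    otherwise, uses the upper limit of a Wilson score interval.
    """
    if k_undetected == 0:
        return min(1.0, 3.0 / n_episodes)
    p_hat = k_undetected / n_episodes
    denom = 1.0 + z * z / n_episodes
    center = p_hat + z * z / (2.0 * n_episodes)
    margin = z * math.sqrt(p_hat * (1.0 - p_hat) / n_episodes
                           + z * z / (4.0 * n_episodes * n_episodes))
    return min(1.0, (center + margin) / denom)


def prob_any_undetected(p_per_episode: float, deployment_episodes: int) -> float:
    """Probability of at least one undetected sabotage attempt across a
    deployment, assuming episodes are independent."""
    return 1.0 - (1.0 - p_per_episode) ** deployment_episodes


# Example: a 100-episode evaluation with no observed evasions still only
# certifies a per-episode bound of 0.03, which compounds to near-certainty
# of at least one undetected success over a 10,000-episode deployment.
p_bound = evasion_upper_bound(k_undetected=0, n_episodes=100)
print(f"per-episode upper bound: {p_bound:.3f}")
print(f"P(any undetected in 10k episodes): {prob_any_undetected(p_bound, 10_000):.6f}")
```

The example shows why the framing matters: a small evaluation that observes no failures only certifies a per-episode bound, and that bound compounds quickly at deployment scale, consistent with the abstract's point that stronger mitigations and more realistic evaluations become necessary as capabilities and deployments grow.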