Artificial General Intelligence (AGI) promises transformative benefits but also presents significant risks. We develop an approach to address the risk of severe harms: those consequential enough to significantly damage humanity. We identify four areas of risk: misuse, misalignment, mistakes, and structural risks. Of these, we focus on technical approaches to misuse and misalignment. For misuse, our strategy aims to prevent threat actors from accessing dangerous capabilities by proactively identifying those capabilities and implementing robust security, access restrictions, monitoring, and model safety mitigations. To address misalignment, we outline two lines of defense. First, model-level mitigations such as amplified oversight and robust training can help build an aligned model. Second, system-level security measures such as monitoring and access control can mitigate harm even if the model is misaligned. Techniques from interpretability, uncertainty estimation, and safer design patterns can enhance the effectiveness of these mitigations. Finally, we briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
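As a toy illustration of the second line of defense, the sketch below wraps an untrusted model in access control (checked before generation) and output monitoring (checked after). This is a minimal example of the defense-in-depth pattern, not code from the paper; the names `AccessPolicy`, `Monitor`, and `guarded_generate` are hypothetical. The point of the pattern is that neither layer trusts the model itself, so a misaligned model's outputs can still be blocked or escalated for human review.

```python
# Hypothetical sketch of system-level mitigations around an untrusted model:
# access control runs before the model, monitoring runs after.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AccessPolicy:
    """Maps user roles to the capability tiers they may invoke."""
    allowed_tiers: dict[str, set[str]] = field(
        default_factory=lambda: {"analyst": {"low"}, "admin": {"low", "high"}}
    )

    def permits(self, user_role: str, tier: str) -> bool:
        return tier in self.allowed_tiers.get(user_role, set())

@dataclass
class Monitor:
    """Flags model outputs matching simple red-flag predicates."""
    predicates: list[Callable[[str], bool]]

    def flags(self, text: str) -> bool:
        return any(p(text) for p in self.predicates)

def guarded_generate(model: Callable[[str], str],
                     policy: AccessPolicy,
                     monitor: Monitor,
                     user_role: str,
                     tier: str,
                     prompt: str) -> str:
    # Access control: refuse before the model ever runs.
    if not policy.permits(user_role, tier):
        return "[refused: insufficient access]"
    output = model(prompt)
    # Monitoring: block suspicious outputs and escalate to a human.
    if monitor.flags(output):
        return "[blocked: flagged for human review]"
    return output

if __name__ == "__main__":
    toy_model = lambda prompt: f"echo: {prompt}"
    monitor = Monitor(predicates=[lambda t: "synthesize" in t.lower()])
    policy = AccessPolicy()
    print(guarded_generate(toy_model, policy, monitor, "analyst", "high", "hi"))
    print(guarded_generate(toy_model, policy, monitor, "admin", "high", "hi"))
```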