Human oversight of AI is promoted as a safeguard against risks such as inaccurate outputs, system malfunctions, or violations of fundamental rights, and is mandated in regulations such as the European AI Act. Yet debates on human oversight have largely focused on its effectiveness while overlooking a critical dimension: the security of human oversight. We argue that human oversight creates a new attack surface within the safety, security, and accountability architecture of AI operations. Drawing on cybersecurity perspectives, we model human oversight as an IT application, enabling systematic threat modeling of the oversight process. Threat modeling allows us to identify security risks within human oversight and points toward possible mitigation strategies. Our contributions are: (1) introducing a security perspective on human oversight, (2) offering researchers and practitioners guidance on how to approach their human oversight applications from a security point of view, and (3) providing a systematic overview of attack vectors and hardening strategies to enable secure human oversight of AI.
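To make the "human oversight as an IT application" framing concrete, the following is a minimal sketch, not the paper's actual method: it decomposes a hypothetical oversight application into standard threat-modeling elements (processes, data stores, data flows, external entities) and enumerates STRIDE threat categories per element as a starting checklist. All component names are illustrative assumptions.

```python
# Illustrative sketch: a STRIDE-style enumeration over a hypothetical
# decomposition of a human oversight application. Real threat modeling
# would prune categories per element kind and add trust boundaries.
from dataclasses import dataclass, field

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

@dataclass
class Element:
    name: str
    kind: str  # "process", "data store", "data flow", or "external entity"
    threats: list = field(default_factory=list)

# Hypothetical components of the oversight pipeline (assumed, not from the paper).
elements = [
    Element("AI system output feed", "data flow"),
    Element("Oversight dashboard", "process"),
    Element("Human overseer", "external entity"),
    Element("Override/veto channel", "data flow"),
    Element("Audit log", "data store"),
]

# Naive first pass: flag every STRIDE category for every element,
# yielding a checklist to be refined during analysis.
for e in elements:
    e.threats = list(STRIDE)
    print(f"{e.name} ({e.kind}): {', '.join(e.threats)}")
```

Treating the overseer's input channels (e.g., the override/veto flow) as first-class elements is what surfaces oversight-specific attack vectors, such as spoofed override commands or tampered audit logs, alongside the usual application-level threats.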