Developing human-controllable artificial intelligence (AI) and achieving meaningful human control (MHC) has become a vital principle for addressing these challenges, ensuring ethical alignment and effective governance in AI. MHC is also a critical focus of human-centered AI (HCAI) research and application. This chapter systematically examines MHC in AI, articulating its foundational principles and future trajectory. MHC is not simply the right to operate; it is the unity of human understanding, intervention, and traceability of responsibility in AI decision-making, which requires technological design, AI governance, and human actors to work together. MHC ensures that AI autonomy serves humans without constraining technological progress. The mode of human control must match the level of the technology, and human supervision should balance trust in AI with doubt of it. For future AI systems, MHC mandates human controllability as a prerequisite, requiring: (1) technical architectures with embedded mechanisms for human control; (2) human-AI interactions optimized for human understanding; and (3) AI systems that evolve to harmonize intelligence with human controllability. Governance must prioritize HCAI strategies: policies balancing innovation with risk mitigation, human-centered participatory frameworks that transcend dominance by technical elites, and global promotion of MHC as a universal governance paradigm to safeguard HCAI development. Looking ahead, there is a need to strengthen interdisciplinary research on the controllability of AI systems, enhance ethical and legal awareness among stakeholders, move beyond simplistic technology-design perspectives, and focus on the knowledge construction, complexity interpretation, and influencing factors surrounding human control. By fostering MHC, the development of human-controllable AI can be further advanced, delivering HCAI systems.