Artificial intelligence governance exhibits a striking paradox: while major jurisdictions converge rhetorically around concepts such as safety, risk, and accountability, their regulatory frameworks remain fundamentally divergent and mutually unintelligible. We argue that this fragmentation cannot be explained by geopolitical rivalry, institutional complexity, or the choice of regulatory instruments alone. Instead, it stems from how AI is constituted as an object of governance through distinct institutional logics. Integrating securitisation theory with the concept of the dispositif, we demonstrate that jurisdictions govern ontologically different objects under the same vocabulary. Using semantic network analysis of official policy texts from the European Union, the United States, and China (2023-2025), we trace how concepts such as safety are embedded within divergent semantic architectures. Our findings reveal that the EU juridifies AI as a certifiable product through a legal-bureaucratic logic; the US operationalises AI as an optimisable system through a market-liberal logic; and China governs AI as socio-technical infrastructure through a holistic state logic. We introduce the concept of structural incommensurability to describe this condition of ontological divergence masked by terminological convergence. This reframing challenges ethics-by-principles approaches to global AI governance, suggesting that coordination failures arise not from disagreement over values but from the absence of a shared reference object.
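By way of illustration only, the sketch below shows the kind of co-occurrence-based semantic network the analysis relies on. The corpus strings, window size, and networkx pipeline here are illustrative assumptions, not the paper's reported method; the point is simply that an identical node ("safety") can sit inside structurally different neighbourhoods in each jurisdiction's network.

```python
from collections import Counter

import networkx as nx

# Hypothetical token snippets standing in for the official policy corpora
# (the actual study analyses full EU, US, and Chinese documents, 2023-2025).
corpus = {
    "EU": ("safety conformity assessment certification product "
           "liability risk obligations market surveillance"),
    "US": ("safety benchmark evaluation measurement performance "
           "innovation risk standards optimisation"),
    "CN": ("safety security stability infrastructure development "
           "coordination governance risk public order"),
}

def cooccurrence_network(text: str, window: int = 3) -> nx.Graph:
    """Build a weighted co-occurrence graph over a sliding token window."""
    tokens = text.lower().split()
    edge_counts = Counter()
    for i, left in enumerate(tokens):
        for right in tokens[i + 1 : i + window]:
            if left != right:
                edge_counts[tuple(sorted((left, right)))] += 1
    graph = nx.Graph()
    for (u, v), weight in edge_counts.items():
        graph.add_edge(u, v, weight=weight)
    return graph

# Compare the immediate semantic neighbourhood of "safety" per jurisdiction.
for jurisdiction, text in corpus.items():
    graph = cooccurrence_network(text)
    neighbours = sorted(graph.neighbors("safety")) if "safety" in graph else []
    print(f"{jurisdiction}: 'safety' co-occurs with {neighbours}")
```

In a full analysis, the comparative claim would rest on structural properties of shared terms across the three networks (for example, weighted centrality or community membership) rather than on raw neighbour lists as in this toy example.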