The persistent militarization of large reasoning models stems not from technical necessity but from governance arrangements that strip researchers of meaningful authority to refuse harmful transfers and deployments. Existing accountability mechanisms, such as model cards and responsible AI statements, operate as reputational signals detached from decision-making architecture. We identify institutional veto power as a missing governance primitive: a formal authority to halt the subsequent use or distribution of research when credible risks of weaponization emerge. Drawing on precedents in nuclear nonproliferation and biomedical ethics, we map unprotected veto points across the research lifecycle, diagnose why compliance without enforceable constraints fails, and offer concrete institutional designs that embed veto authority while reducing the risk of political capture. We argue that the communities most vulnerable to military uses must lead governance design, and that institutional veto power is a prerequisite for converting symbolic safeguards into enforceable responsibility and for achieving meaningful model disarmament.