As automated systems increasingly transition from decision support to direct execution, the problem of accountability shifts from decision quality to execution legitimacy. While optimization, execution, and feedback mechanisms are extensively modeled in contemporary AI and control architectures, the structural role of judgment remains undefined: judgment is typically introduced as an external intervention rather than a native precondition to execution. This work does not propose a new decision-making algorithm or safety heuristic; instead, it identifies this absence as a missing Judgment Root Node and proposes LERA (Judgment-Governance Architecture), a structural framework that enforces judgment as a mandatory, non-bypassable prerequisite for execution. LERA is founded on two axioms: (1) execution is a matter not of system capability but of structural permission, and (2) execution is not the chronological successor of judgment but its structural consequence. Together, these axioms decouple execution legitimacy from computational capacity and bind it to judgment completion through a governance gate. LERA does not aim to optimize decisions or automate judgment; rather, it institutionalizes judgment as a first-class architectural component, ensuring that execution authority remains accountable. By reinstating judgment at the execution boundary, LERA establishes a foundational architecture for judgment-governed automation.
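To make the governance-gate idea concrete, the following is a minimal illustrative sketch, not an implementation from this paper: all names (`JudgmentRecord`, `GovernanceGate`, `execute`) are hypothetical. It shows the structural claim that there is no code path to execution that does not pass through a completed judgment record.

```python
from dataclasses import dataclass
from typing import Callable, Any

# Hypothetical sketch of a judgment-governed execution boundary.
# Execution is granted by structural permission (a completed judgment
# record), not by the caller's capability to invoke the action.


@dataclass(frozen=True)
class JudgmentRecord:
    """Evidence that judgment has completed for a given action."""
    action_id: str
    approved: bool
    rationale: str


class GovernanceGate:
    """The only route to execution: an action runs solely as the
    structural consequence of a recorded, approving judgment."""

    def __init__(self) -> None:
        self._records: dict[str, JudgmentRecord] = {}

    def record_judgment(self, record: JudgmentRecord) -> None:
        self._records[record.action_id] = record

    def execute(self, action_id: str, action: Callable[[], Any]) -> Any:
        record = self._records.get(action_id)
        if record is None:
            # Judgment has not completed: execution is structurally denied.
            raise PermissionError(f"no judgment recorded for {action_id!r}")
        if not record.approved:
            raise PermissionError(
                f"judgment denied {action_id!r}: {record.rationale}"
            )
        return action()
```

In this sketch, calling `execute` before `record_judgment` raises `PermissionError` regardless of what the action itself is capable of, which mirrors axiom (1): execution hinges on structural permission, not system capability.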