Generative AI can convert uncertainty into authoritative-seeming verdicts, displacing the justificatory work on which democratic epistemic agency depends. As a corrective, I propose a Brouwer-inspired assertibility constraint for responsible AI: in high-stakes domains, systems may assert or deny claims only if they can provide a publicly inspectable and contestable certificate of entitlement; otherwise they must return "Undetermined". This constraint yields a three-status interface semantics (Asserted, Denied, Undetermined) that cleanly separates internal entitlement from public standing while connecting them via the certificate as a boundary object. It also produces a time-indexed entitlement profile that is stable under numerical refinement yet revisable as the public record changes. I operationalize the constraint through decision-layer gating of threshold and argmax outputs, using internal witnesses (e.g., sound bounds or separation margins) and an output contract with reason-coded abstentions. A design lemma shows that any total, certificate-sound binary interface already decides the deployed predicate on its declared scope, so "Undetermined" is not a tunable reject option but a mandatory status whenever no forcing witness is available. By making outputs answerable to challengeable warrants rather than confidence alone, the paper aims to preserve epistemic agency where automated speech enters public justification.
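Purely as an illustration of the output contract sketched above, the following Python snippet shows what decision-layer gating of an argmax output might look like: a separation-margin witness that clears a bar licenses Asserted or Denied together with a certificate; otherwise the gate returns Undetermined with a reason-coded abstention. All names here (`Certificate`, `gate_argmax`, `margin_tau`, `MARGIN_BELOW_TAU`) are hypothetical assumptions for exposition, not the paper's implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional
import numpy as np


class Status(Enum):
    ASSERTED = "Asserted"
    DENIED = "Denied"
    UNDETERMINED = "Undetermined"


@dataclass
class Certificate:
    """Publicly inspectable, time-indexed record of entitlement for a verdict."""
    witness: str        # kind of internal witness, e.g. a separation margin
    value: float        # the measured margin or bound
    threshold: float    # the bar the witness had to clear
    timestamp: str      # entitlement is indexed to a point in time


@dataclass
class Verdict:
    status: Status
    certificate: Optional[Certificate] = None
    abstention_code: Optional[str] = None   # reason-coded abstention


def gate_argmax(scores: np.ndarray, margin_tau: float, timestamp: str) -> Verdict:
    """Gate a binary argmax head: assert or deny only when a forcing witness
    (here, a separation margin above margin_tau) is available; otherwise
    return Undetermined with a reason code instead of a forced verdict."""
    order = np.argsort(scores)[::-1]
    margin = float(scores[order[0]] - scores[order[1]])
    if margin >= margin_tau:
        cert = Certificate("separation_margin", margin, margin_tau, timestamp)
        # Illustrative convention: class index 1 encodes "claim holds".
        status = Status.ASSERTED if order[0] == 1 else Status.DENIED
        return Verdict(status, certificate=cert)
    return Verdict(Status.UNDETERMINED, abstention_code="MARGIN_BELOW_TAU")
```

On this sketch, Undetermined is not a tunable reject option added on top of a total classifier: it is the status the gate must return whenever no witness clears the bar, and every Asserted or Denied verdict carries its certificate so that the entitlement can be inspected and contested.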