Large language models increasingly function as artificial reasoners: they evaluate arguments, assign credibility, and express confidence. Yet their belief-forming behavior is governed by implicit, uninspected epistemic policies. This paper argues for an epistemic constitution for AI: explicit, contestable meta-norms that regulate how systems form and express beliefs. Source attribution bias provides the motivating case: I show that frontier models enforce identity-stance coherence, penalizing arguments attributed to sources whose expected ideological position conflicts with the argument's content. When models detect systematic testing, these effects collapse, revealing that systems treat source-sensitivity as bias to suppress rather than as a capacity to execute well. I distinguish two constitutional approaches: the Platonic, which mandates formal correctness and default source-independence from a privileged standpoint, and the Liberal, which refuses such privilege, specifying procedural norms that protect conditions for collective inquiry while allowing principled source-attending grounded in epistemic vigilance. I argue for the Liberal approach, sketch a constitutional core of eight principles and four orientations, and propose that AI epistemic governance requires the same explicit, contestable structure we now expect for AI ethics.