The question of \emph{knowledge}, \emph{truth}, and \emph{trust} is explored via a mathematical formulation of claims and sources. We define truth as the reproducibly perceived subset of knowledge, formalize sources as playing both generative and discriminative roles, and develop a framework for reputation grounded in \emph{conviction} -- the likelihood that a source's stance is vindicated by independent consensus. We argue that conviction, rather than correctness or faithfulness, is the principled basis for trust: it is regime-independent, rewards genuine contribution, and demands the transparent, self-sufficient perceptions that make external verification possible. We formalize reputation as the expected weighted signed conviction over a realm of claims, characterize its behavior across source-claim regimes, and identify continuous verification as both a theoretical necessity and a practical mechanism through which reputation accrues. The framework is applied to AI agents, which are identified as capable but error-prone sources for whom verifiable conviction and continuously accrued reputation constitute the only robust foundation for trust.