Online platforms increasingly rely on opinion aggregation to allocate real-world attention and resources, yet common signals such as engagement votes or capital-weighted commitments are easy to amplify and often track visibility rather than reliability. This makes collective judgments brittle under weak truth signals, noisy or delayed feedback, early popularity surges, and strategic manipulation. We propose Credibility Governance (CG), a mechanism that reallocates influence by learning which agents and viewpoints consistently track evolving public evidence. CG maintains dynamic credibility scores for both agents and opinions, updates opinion influence via credibility-weighted endorsements, and updates agent credibility based on the long-run performance of the opinions they support, rewarding early and persistent alignment with emerging evidence while filtering short-lived noise. We evaluate CG in POLIS, a socio-physical simulation environment that models coupled belief dynamics and downstream feedback under uncertainty. Across settings with initial majority misalignment, observation noise and contamination, and misinformation shocks, CG outperforms vote-based, stake-weighted, and no-governance baselines, yielding faster recovery to the true state, reduced lock-in and path dependence, and improved robustness under adversarial pressure. Our implementation and experimental scripts are publicly available at https://github.com/Wanying-He/Credibility_Governance.
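The core loop described above, in which opinion influence is computed from credibility-weighted endorsements and agent credibility is then updated from the evidence performance of the opinions each agent supports, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact update rule; the multiplicative-exponential update, the learning rate `lr`, and the normalization choices are all assumptions made for illustration.

```python
import numpy as np

def cg_step(agent_cred, endorse, outcome, lr=0.1):
    """One illustrative CG-style update round (assumed form, not the paper's rule).

    agent_cred : (n_agents,) current agent credibility scores in (0, 1]
    endorse    : (n_agents, n_opinions) 0/1 endorsement matrix
    outcome    : (n_opinions,) observed alignment of each opinion with
                 public evidence this round, in [-1, 1]
    """
    # Opinion influence: credibility-weighted endorsement mass,
    # normalized so influences sum to 1.
    influence = agent_cred @ endorse
    influence = influence / influence.sum()

    # Agent update: each agent's credibility moves with the average
    # evidence performance of the opinions it endorsed, so persistent
    # alignment compounds while short-lived noise washes out.
    per_agent = endorse @ outcome / np.maximum(endorse.sum(axis=1), 1)
    agent_cred = np.clip(agent_cred * np.exp(lr * per_agent), 1e-6, 1.0)
    return agent_cred, influence
```

Under this sketch, an agent that consistently endorses opinions later borne out by evidence sees its credibility, and hence its future endorsement weight, grow, which is the reallocation-of-influence behavior the abstract describes.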