Agentic AI systems, autonomous entities capable of independent planning and execution, are reshaping the landscape of human-AI trust. Long before users gain direct exposure to such systems, their expectations are shaped by high-stakes public discourse on social platforms. However, platform-mediated engagement signals (e.g., upvotes) may inadvertently function as a ``credibility proxy,'' potentially stifling critical evaluation. This paper investigates the interplay between social proof and verification timing in online discussions of agentic AI. Analyzing a longitudinal dataset from two Reddit communities with contrasting interaction cultures, r/OpenClaw and r/Moltbook, we operationalize verification cues via reproducible lexical rules and model the ``time-to-first-verification'' within a right-censored survival analysis framework. Our findings reveal a systemic ``Popularity Paradox'': in both subreddits, high-visibility discussions exhibit significantly delayed, or entirely absent, verification cues compared to low-visibility threads. This temporal lag opens a critical window for ``Narrative Lock-in,'' in which early, unverified claims crystallize into collective cognitive biases before evidence-seeking behaviors emerge. We discuss the implications of this ``credibility-by-visibility'' effect for AI safety and propose ``epistemic friction'' as a design intervention to rebalance engagement-driven platforms.
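The right-censored setup described above can be sketched with a minimal Kaplan-Meier estimator: threads whose first verification cue never appears within the observation window contribute as censored observations rather than events. The function name and the sample data below are illustrative assumptions, not artifacts of the study.

```python
def kaplan_meier(samples):
    """Kaplan-Meier survival estimate under right censoring.

    samples: list of (time, observed) pairs, where time is hours until
    the first verification cue and observed=False marks a thread that
    was still unverified at the end of the study window (censored).
    Returns [(event_time, survival_probability), ...].
    """
    samples = sorted(samples)
    n_at_risk = len(samples)
    survival = 1.0
    curve = []
    i = 0
    while i < len(samples):
        t = samples[i][0]
        at_risk = n_at_risk
        events = 0
        # Group all observations tied at time t; censored threads leave
        # the risk set but do not trigger a survival-probability drop.
        while i < len(samples) and samples[i][0] == t:
            if samples[i][1]:
                events += 1
            n_at_risk -= 1
            i += 1
        if events:
            survival *= 1.0 - events / at_risk
            curve.append((t, survival))
    return curve


# Hypothetical threads: hours to first verification cue, or censored.
threads = [(2, True), (4, True), (4, False), (7, True), (10, False)]
curve = kaplan_meier(threads)
```

Fitting separate curves for high- and low-visibility threads and comparing them (e.g., with a log-rank test) is the standard way to quantify the verification delay the abstract reports.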