Oversight for agentic AI is often discussed as a single goal ("human control"), yet early adoption may produce role-specific expectations. We present a comparative analysis of two newly active Reddit communities in Jan--Feb 2026 that reflect different socio-technical roles: r/OpenClaw (deployment and operations) and r/Moltbook (agent-centered social interaction). We conceptualize this period as an early-stage crystallization phase, where oversight expectations form before norms reach equilibrium. Using topic modeling in a shared comparison space, a coarse-grained oversight-theme abstraction, engagement-weighted salience, and divergence tests, we show the communities are strongly separable (JSD = 0.418, cosine = 0.372, permutation $p = 0.0005$). Across both communities, "human control" is an anchor term, but its operational meaning diverges: r/OpenClaw emphasizes execution guardrails and recovery (action-risk), while r/Moltbook emphasizes identity, legitimacy, and accountability in public interaction (meaning-risk). The resulting distinction offers a portable lens for designing and evaluating oversight mechanisms that match agent role, rather than applying one-size-fits-all control policies.
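The divergence statistics reported above (JSD and a permutation $p$-value) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`jsd`, `permutation_test`), the label-shuffling test statistic (JSD between per-community mean topic distributions), and the permutation count are all assumptions for the sketch.

```python
import numpy as np

def jsd(p, q, base=2):
    """Jensen-Shannon divergence between two discrete distributions (base-2 by assumption)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) := 0
        return np.sum(a[mask] * (np.log(a[mask] / b[mask]) / np.log(base)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def permutation_test(topic_rows, labels, n_perm=2000, seed=0):
    """Shuffle community labels and recompute the JSD between per-group mean
    topic distributions; p-value = fraction of permuted statistics >= observed
    (with the standard +1 smoothing)."""
    rng = np.random.default_rng(seed)
    topic_rows, labels = np.asarray(topic_rows, float), np.asarray(labels)

    def stat(lab):
        a = topic_rows[lab == 0].mean(axis=0)
        b = topic_rows[lab == 1].mean(axis=0)
        return jsd(a, b)

    observed = stat(labels)
    count = sum(stat(rng.permutation(labels)) >= observed for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)
```

Here `topic_rows` would hold one topic distribution per post (rows from the shared comparison space) and `labels` the community of each post; disjoint topic usage between the two groups drives the observed JSD toward 1 and the permutation p-value toward its minimum of $1/(n_\text{perm}+1)$.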