Standard models of bounded rationality typically assume agents either possess accurate knowledge of the population's reasoning abilities (Cognitive Hierarchy) or hold dogmatic, degenerate beliefs (Level-$k$). We introduce the ``Connected Minds'' model, which unifies these frameworks by integrating iterative reasoning with a parameterized network bias. We posit that agents do not observe the global population; rather, they observe a sample biased by their network position, governed by a locality parameter $p$ representing algorithmic ranking, social homophily, or information disclosure. We show that this parameter acts as a continuous bridge: the model collapses to the myopic Level-$k$ recursion as networks become opaque ($p \to 0$) and recovers the standard Cognitive Hierarchy model under full transparency ($p=1$). Theoretically, we establish that network opacity induces a \emph{Sophisticated Bias}, causing agents to systematically overestimate the cognitive depth of their opponents while preserving the log-concavity of belief distributions. This makes $p$ an actionable lever: a planner or platform can tune transparency -- globally or by segment (a personalized $p_k$) -- to shape equilibrium behavior. From a mechanism design perspective, we derive the \emph{Escalation Principle}: in games of strategic complements, restricting information can maximize aggregate effort by trapping agents in echo chambers where they compete against hallucinated, high-sophistication peers. Conversely, we identify a \emph{Transparency Reversal} for coordination games, where maximizing network visibility is required to minimize variance and stabilize outcomes. Our results suggest that network topology functions as a cognitive zoom lens, determining whether agents behave as local imitators or global optimizers.
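The bridging role of $p$ described above can be illustrated with a minimal numerical sketch. The abstract does not specify the model's functional form, so the following assumes a standard Cognitive Hierarchy belief (a Poisson distribution with rate `tau`, truncated to levels below the agent's own level $k$) and a simple linear mixture in $p$ between that belief and the degenerate Level-$k$ belief on level $k-1$; both the mixture form and `tau` are illustrative assumptions, not the paper's specification.

```python
import math

def ch_belief(k, tau=1.5):
    """Truncated-Poisson CH belief of a level-k agent over opponent
    levels 0..k-1 (standard Cognitive Hierarchy form; tau is assumed)."""
    weights = [math.exp(-tau) * tau**j / math.factorial(j) for j in range(k)]
    total = sum(weights)
    return [w / total for w in weights]

def connected_minds_belief(k, p, tau=1.5):
    """Hypothetical interpolation: with transparency p the agent sees the
    CH population mixture; with opacity 1-p it falls back on the dogmatic
    Level-k belief that all opponents reason at exactly level k-1."""
    ch = ch_belief(k, tau)
    return [p * ch[j] + (1 - p) * (1.0 if j == k - 1 else 0.0)
            for j in range(k)]

def expected_opponent_level(belief):
    return sum(j * b for j, b in enumerate(belief))

# At p=0 a level-3 agent believes everyone reasons at level 2 (Level-k);
# at p=1 belief mass spreads over lower levels (Cognitive Hierarchy).
for p in (0.0, 0.5, 1.0):
    b = connected_minds_belief(3, p)
    print(f"p={p}: belief={['%.3f' % x for x in b]}, "
          f"E[level]={expected_opponent_level(b):.3f}")
```

Under this toy parameterization, the expected opponent level falls monotonically as $p$ rises, which is one way to visualize the Sophisticated Bias: opaque networks ($p \to 0$) push perceived opponent depth toward its maximum.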