Large language models (LLMs) have enabled multi-agent systems (MAS) in which multiple agents argue, critique, and coordinate to solve complex tasks, making communication topology a first-class design choice. Yet most existing LLM-based MAS adopt fully connected graphs, simple sparse rings, or ad-hoc dynamic selection, with little structural guidance. In this work, we revisit classic theory on small-world (SW) networks and ask: what changes if we treat SW connectivity as a design prior for MAS? We first bridge insights from neuroscience and complex networks to MAS, highlighting how SW structures balance local clustering and long-range integration. Using multi-agent debate (MAD) as a controlled testbed, we show that SW connectivity yields nearly the same accuracy and token cost while substantially stabilizing consensus trajectories. Building on this, we introduce an uncertainty-guided rewiring scheme for scaling MAS, in which long-range shortcuts are added between epistemically divergent agents using LLM-oriented uncertainty signals (e.g., semantic entropy). This yields controllable SW structures that adapt to task difficulty and agent heterogeneity. Finally, we discuss broader implications of SW priors for MAS design, framing them as stabilizers of reasoning, enhancers of robustness, scalable coordinators, and inductive biases for emergent cognitive roles.
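To make the uncertainty-guided rewiring idea concrete, here is a minimal Python sketch rather than the paper's implementation: agents start on a ring lattice (local clustering), and a few long-range shortcuts are added between the agent pairs with the highest disagreement scores, which stand in for semantic-entropy-style uncertainty signals. The function name `smallworld_rewire`, the `divergence` scores, and the use of `networkx` are illustrative assumptions.

```python
import itertools
import random

import networkx as nx


def smallworld_rewire(num_agents, divergence, k=2, num_shortcuts=2):
    """Build a small-world-style communication topology for a MAS (illustrative sketch).

    Agents start on a ring lattice (each talks to its k nearest neighbors),
    and long-range shortcuts are then added between the most epistemically
    divergent non-adjacent pairs. `divergence` maps pairs (i, j) with i < j to
    a scalar disagreement score (e.g., derived from semantic entropy over the
    two agents' answers); higher means stronger disagreement.
    """
    # Local clustering: a ring lattice (Watts-Strogatz graph with p=0, i.e. no random rewiring).
    graph = nx.watts_strogatz_graph(num_agents, k, p=0.0)

    # Long-range integration: rank non-adjacent pairs by disagreement and
    # connect the top ones, creating small-world shortcuts.
    candidates = [
        pair for pair in itertools.combinations(range(num_agents), 2)
        if not graph.has_edge(*pair)
    ]
    candidates.sort(key=lambda pair: divergence.get(pair, 0.0), reverse=True)
    graph.add_edges_from(candidates[:num_shortcuts])
    return graph


# Toy usage: random scores stand in for LLM-derived uncertainty signals.
random.seed(0)
scores = {pair: random.random() for pair in itertools.combinations(range(8), 2)}
topology = smallworld_rewire(8, scores, k=2, num_shortcuts=3)
print(sorted(topology.edges()))
```

In an actual MAS, the random `scores` would be replaced by disagreement measured from the agents' intermediate answers, and the rewiring step could be repeated between debate rounds as the uncertainty estimates are updated.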