Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration, demonstrating that collective intelligence can surpass the capabilities of any individual agent. Inspired by the neural scaling law, which posits that increasing the number of neurons leads to emergent abilities, this study investigates whether a similar principle applies to increasing the number of agents in multi-agent collaboration. Technically, we propose multi-agent collaboration networks (MacNet), which utilize directed acyclic graphs to organize agents and streamline their interactive reasoning via topological ordering, with solutions derived from their dialogues. Extensive experiments show that MacNet consistently outperforms baseline models, enabling effective agent collaboration across various network topologies and supporting cooperation among more than a thousand agents. Notably, we observed a small-world collaboration phenomenon, where topologies resembling small-world properties achieved superior performance. Additionally, we identified a collaborative scaling law, indicating that normalized solution quality follows a logistic growth pattern as agents are scaled, with collaborative emergence occurring much earlier than previously observed instances of neural emergence. The code and data will be available at https://github.com/OpenBMB/ChatDev.
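The core mechanism described above, organizing agents as nodes of a directed acyclic graph and driving their interactive reasoning in topological order, can be sketched as follows. This is a minimal illustration, not MacNet's actual implementation: the `refine` stub, agent names, and edge set are assumptions standing in for real LLM-backed agents and learned topologies.

```python
from graphlib import TopologicalSorter

def refine(agent, upstream_solutions):
    # Placeholder for an LLM call; here we just record the reasoning chain
    # so the flow of information through the DAG is visible.
    merged = " + ".join(upstream_solutions) if upstream_solutions else "task"
    return f"{agent}({merged})"

# DAG mapping each agent to its predecessor agents (edges point downstream).
dag = {
    "A": set(),
    "B": {"A"},
    "C": {"A"},
    "D": {"B", "C"},
}

solutions = {}
# Topological ordering guarantees every agent runs only after all of its
# predecessors have produced their solutions.
for agent in TopologicalSorter(dag).static_order():
    upstream = [solutions[p] for p in sorted(dag[agent])]
    solutions[agent] = refine(agent, upstream)

print(solutions["D"])  # the sink agent's output aggregates the full dialogue
```

With this toy topology, the sink agent `D` receives the refined outputs of both branches (`B` and `C`), each of which built on `A`, mirroring how a solution is derived from the accumulated dialogues along the graph.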