Large Language Models (LLMs) are increasingly instantiated as interacting agents in multi-agent systems (MAS), where collective decisions emerge through social interaction rather than independent reasoning. A fundamental yet underexplored mechanism in this process is conformity, the tendency of agents to align their judgments with prevailing group opinions. This paper presents a systematic study of how network topology shapes conformity dynamics in LLM-based MAS through a misinformation detection task. We introduce a confidence-normalized pooling rule that controls the trade-off between self-reliance and social influence, enabling comparisons between two canonical decision paradigms: Centralized Aggregation and Distributed Consensus. Experimental results demonstrate that network topology critically governs both the efficiency and robustness of collective judgments. Centralized structures enable immediate decisions but are sensitive to hub competence and exhibit same-model alignment biases. In contrast, distributed structures promote more robust consensus, while increased network connectivity speeds up convergence but also heightens the risk of wrong-but-sure cascades, in which agents converge on incorrect decisions with high confidence. These findings characterize the conformity dynamics in LLM-based MAS, clarifying how network topology and self-social weighting jointly shape the efficiency, robustness, and failure modes of collective decision-making.
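The pooling rule itself is not specified in the abstract; a minimal sketch of one plausible form is shown below, where the function name `pooled_belief`, the mixing parameter `alpha` (self-reliance vs. social influence), and the encoding of judgments as probabilities with positive confidence scores are all assumptions, not the paper's actual formulation:

```python
def pooled_belief(self_belief, neighbor_beliefs, neighbor_confs, alpha):
    """Blend an agent's own judgment with a confidence-normalized
    average of its neighbors' judgments.

    alpha=1.0 -> pure self-reliance; alpha=0.0 -> pure social influence.
    Beliefs are probabilities in [0, 1] (e.g. P(claim is misinformation));
    confidences are positive self-reported scores.
    """
    total_conf = sum(neighbor_confs)
    # Confidence-normalized social estimate: high-confidence
    # neighbors contribute proportionally more to the pooled opinion.
    social = sum(c * b for c, b in zip(neighbor_confs, neighbor_beliefs)) / total_conf
    # Convex combination controls the self-vs-social trade-off.
    return alpha * self_belief + (1 - alpha) * social
```

Under this sketch, a star (centralized) topology would apply the rule once at the hub over all leaf agents, while a distributed topology would apply it iteratively at every node over its local neighborhood until beliefs converge.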