Multi-agent systems powered by large language models exhibit strong capabilities in collaborative problem-solving. However, these systems suffer from substantial knowledge redundancy: agents duplicate effort in retrieval and reasoning. This inefficiency stems from a deeper issue: current architectures lack mechanisms to ensure that agents share minimal sufficient information at each operational stage. Empirical analysis reveals an average knowledge duplication rate of 47.3\% across agent communications. We propose D3MAS (Decompose, Deduce, and Distribute), a hierarchical coordination framework that addresses redundancy through structural design rather than explicit optimization. The framework organizes collaboration across three coordinated layers: task decomposition filters irrelevant sub-problems early, collaborative reasoning captures complementary inference paths across agents, and distributed memory provides access to non-redundant knowledge. These layers coordinate through structured message passing over a unified heterogeneous graph, keeping shared information aligned with actual task needs. Experiments on four challenging datasets show that D3MAS consistently improves reasoning accuracy by 8.7\% to 15.6\% and reduces knowledge redundancy by 46\% on average.