Recent advances in large language models (LLMs) have substantially accelerated the development of embodied agents. LLM-based multi-agent systems mitigate the inefficiency of single agents on complex tasks, yet they still suffer from issues such as memory inconsistency and behavioral conflicts among agents. To address these challenges, we propose MiTa, a hierarchical memory-integrated task-allocation framework that enhances collaborative efficiency. MiTa organizes agents into a manager-member hierarchy, in which the manager is equipped with additional allocation and summary modules that enable (1) global task allocation and (2) episodic memory integration. The allocation module lets the manager assign tasks from a global perspective, thereby avoiding potential inter-agent conflicts. The summary module, triggered by task-progress updates, performs episodic memory integration by condensing recent collaboration history into a concise summary that preserves long-horizon context. By combining task allocation with episodic memory, MiTa attains a clearer understanding of the overall task and yields globally consistent task distribution. Experimental results confirm that MiTa achieves superior efficiency and adaptability in complex multi-agent cooperation compared with strong baselines.