The adaptation of Large Language Model (LLM)-based agents to execute tasks via natural language prompts represents a significant advancement, notably eliminating the need for explicit retraining or fine-tuning. However, these agents are constrained by the comprehensiveness and diversity of the provided examples, which leads to outputs that often diverge significantly from expected results, especially for open-ended questions. This paper introduces Memory Sharing (MS), a framework that integrates real-time memory filtering, storage, and retrieval to enhance the In-Context Learning process. The framework allows memories to be shared among multiple agents, whereby the interactions and shared memories between different agents effectively enhance the diversity of the memory pool. This collective self-enhancement through interactive learning among multiple agents facilitates the evolution from individual intelligence to collective intelligence. In addition, the dynamically growing memory pool is utilized not only to improve the quality of responses but also to train and enhance the retriever. We evaluated the framework across three distinct domains involving specialized agent tasks. The experimental results demonstrate that the MS framework significantly improves the agents' performance in addressing open-ended questions.