To improve the performance of large language models (LLMs), researchers have explored providing LLMs with textual task-solving experience via prompts. However, these approaches rely on manual effort to acquire and apply such experience for each task, which is infeasible given the growing demand for LLMs and the variety of user questions. To address this issue, we design a lifelong autonomous experiential learning framework based on LLMs to explore whether LLMs can imitate the human ability to learn from and utilize experience. The framework autonomously learns and accumulates experience through experience transfer and induction, and categorizes input questions by type to select which accumulated experience to apply to them. Experimental results on six widely used NLP datasets show that our framework performs reliably at each intermediate step and effectively improves the performance of GPT-3.5 and GPT-4. This validates the feasibility of using LLMs to mimic human capabilities for experiential learning and application. Additionally, we provide a detailed analysis of the behavior of our framework at each step.
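The pipeline described above — inducing textual experience from solved examples, classifying incoming questions by type, and retrieving matching experience to build a prompt — can be sketched as follows. This is a minimal illustration under assumptions of our own: the names `ExperienceBank`, `induce`, `classify`, and `build_prompt` are hypothetical, and the LLM calls the real framework would make (for induction and classification) are replaced with trivial placeholder logic.

```python
from collections import defaultdict

class ExperienceBank:
    """Accumulates textual experience per question type and retrieves it."""

    def __init__(self):
        # question type -> list of induced experience strings
        self.store = defaultdict(list)

    def induce(self, question_type, solved_examples):
        # The real framework would have an LLM summarize solved examples
        # into reusable textual experience; a placeholder join stands in here.
        experience = ("When answering {} questions, recall: {}"
                      .format(question_type, "; ".join(solved_examples)))
        self.store[question_type].append(experience)

    def classify(self, question):
        # Stand-in for LLM-based question-type classification.
        return "math" if any(ch.isdigit() for ch in question) else "general"

    def build_prompt(self, question):
        # Select accumulated experience matching the question's type
        # and prepend it to the question as prompt context.
        qtype = self.classify(question)
        experience = "\n".join(self.store.get(qtype, []))
        if experience:
            return "{}\nQuestion: {}".format(experience, question)
        return "Question: {}".format(question)

bank = ExperienceBank()
bank.induce("math", ["check units", "verify arithmetic"])
prompt = bank.build_prompt("What is 12 * 7?")
```

In this sketch, experience accumulates across tasks in `store`, so later questions of the same type automatically benefit from earlier inductions — the "lifelong" aspect — without any per-task manual prompt engineering.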