Large language models (LLMs) have demonstrated impressive capabilities in reasoning and planning when integrated with tree-search-based prompting methods. However, because these methods ignore previous search experiences, they often repeat the same mistakes during search. To address this issue, we introduce Reflection on search Trees (RoT), an LLM reflection framework designed to improve the performance of tree-search-based prompting methods. RoT uses a strong LLM to summarize guidelines from previous tree-search experiences in order to enhance the ability of a weaker LLM. The guidelines are instructions for solving the task through tree search and can prevent the weaker LLM from repeating mistakes made in past searches. In addition, we propose a novel state selection method that identifies the critical information in historical search processes, helping RoT generate more specific and meaningful guidelines. In extensive experiments, we find that RoT significantly improves the performance of LLMs on reasoning and planning tasks with various tree-search-based prompting methods (e.g., BFS and MCTS). Non-tree-search-based prompting methods such as Chain-of-Thought (CoT) also benefit from RoT guidelines, since RoT provides task-specific knowledge collected from search experience.
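The reflection loop described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the `State` fields, the value-shift heuristic for selecting critical states, and the `strong_llm` callable are all hypothetical stand-ins for the actual search data structures and prompts.

```python
# Hypothetical sketch of a RoT-style reflection loop.
# Assumed interfaces; the paper's prompts and search code are not reproduced.
from dataclasses import dataclass


@dataclass
class State:
    text: str            # textual description of the search state
    value: float         # value estimate from the search (e.g., an MCTS Q-value)
    parent_value: float  # value estimate of the parent state


def select_critical_states(trajectory, k=2):
    """Pick the k states whose value shifted most from their parent --
    a stand-in for the state selection of 'critical information'."""
    return sorted(trajectory,
                  key=lambda s: abs(s.value - s.parent_value),
                  reverse=True)[:k]


def summarize_guidelines(strong_llm, critical_states):
    """Ask a strong LLM to distill the critical states into task guidelines."""
    prompt = ("Summarize guidelines for this task from these search states:\n"
              + "\n".join(s.text for s in critical_states))
    return strong_llm(prompt)


def guided_prompt(guidelines, task):
    """Prepend the guidelines so the weaker LLM can avoid past mistakes."""
    return f"Guidelines:\n{guidelines}\n\nTask:\n{task}"


# Toy usage with a mocked strong LLM.
trajectory = [
    State("offered 8 books for 2 hats", value=0.9, parent_value=0.4),
    State("accepted first offer", value=0.2, parent_value=0.3),
    State("rejected lowball counteroffer", value=0.7, parent_value=0.6),
]
mock_strong_llm = lambda p: "Do not accept the first offer without comparison."
critical = select_critical_states(trajectory)
guidelines = summarize_guidelines(mock_strong_llm, critical)
prompt = guided_prompt(guidelines, "Negotiate the book-hat trade.")
```

After one such round, `prompt` carries experience-derived guidelines into the next search, which is what lets non-tree-search methods such as CoT benefit as well.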