We introduce Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning approach for enhancing the accuracy, efficiency, and robustness of large language models (LLMs). Specifically, we propose a meta-buffer to store a series of informative high-level thoughts, namely thought-templates, distilled from problem-solving processes across various tasks. For each problem, we then retrieve a relevant thought-template and adaptively instantiate it with a specific reasoning structure to conduct efficient reasoning. To guarantee scalability and stability, we further propose a buffer-manager that dynamically updates the meta-buffer, enhancing its capacity as more tasks are solved. We conduct extensive experiments on 10 challenging reasoning-intensive tasks and achieve significant performance improvements over previous SOTA methods: 11% on Game of 24, 20% on Geometric Shapes, and 51% on Checkmate-in-One. Further analysis demonstrates the superior generalization ability and robustness of our BoT, while requiring only 12% of the cost of multi-query prompting methods (e.g., tree/graph of thoughts) on average. Notably, we find that Llama3-8B+BoT has the potential to surpass the Llama3-70B model. Our project is available at: https://github.com/YangLing0818/buffer-of-thought-llm
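The retrieve-and-instantiate pattern described above can be sketched as follows. This is a minimal, hypothetical illustration only: the class names (`MetaBuffer`, `ThoughtTemplate`) and the keyword-overlap retrieval are assumptions for exposition, not the paper's actual implementation, which would use an LLM-based retriever and instantiator.

```python
from dataclasses import dataclass

@dataclass
class ThoughtTemplate:
    """A distilled high-level reasoning pattern (hypothetical structure)."""
    name: str
    keywords: set          # features used for retrieval
    skeleton: str          # reasoning structure with a {problem} slot

class MetaBuffer:
    """Stores thought-templates; retrieval and dynamic updates are simplified."""
    def __init__(self):
        self.templates = []

    def add(self, template: ThoughtTemplate):
        # Stand-in for the buffer-manager's dynamic update step.
        self.templates.append(template)

    def retrieve(self, problem: str) -> ThoughtTemplate:
        # Score by keyword overlap; a real system would use embedding similarity.
        words = set(problem.lower().split())
        return max(self.templates, key=lambda t: len(t.keywords & words))

    def instantiate(self, problem: str) -> str:
        # Adaptively fill the retrieved skeleton with the concrete problem.
        return self.retrieve(problem).skeleton.format(problem=problem)

buffer = MetaBuffer()
buffer.add(ThoughtTemplate(
    "arithmetic-game",
    {"numbers", "24", "arithmetic"},
    "Template 'arithmetic-game': enumerate operator placements for: {problem}",
))
buffer.add(ThoughtTemplate(
    "chess",
    {"checkmate", "move", "board"},
    "Template 'chess': search forcing move sequences for: {problem}",
))

prompt = buffer.instantiate("Use the numbers 4 6 8 2 to reach 24")
print(prompt)
```

In this sketch, retrieval selects the template whose keywords best match the problem statement, and instantiation produces the task-specific reasoning prompt that would be handed to the LLM.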