Generating accurate step-by-step reasoning is essential for Large Language Models (LLMs) to address complex problems and enhance robustness and interpretability. Despite the flux of research on developing advanced reasoning approaches, systematically analyzing the diverse LLMs and reasoning strategies in generating reasoning chains remains a significant challenge. The difficulties stem from the lack of two key elements: (1) an automatic method for evaluating the generated reasoning chains on different tasks, and (2) a unified formalism and implementation of the diverse reasoning approaches for systematic comparison. This paper aims to close the gap: (1) We introduce AutoRace for fully automated reasoning chain evaluation. Existing metrics rely on expensive human annotations or pre-defined LLM prompts that are not adaptable to different tasks. In contrast, AutoRace automatically creates detailed evaluation criteria tailored to each task, and uses GPT-4 for accurate evaluation following the criteria. (2) We develop LLM Reasoners, a library for standardized, modular implementation of existing and new reasoning algorithms under a unified formulation of the search, reward, and world model components. With the new evaluation method and library, (3) we conduct an extensive study of different reasoning approaches (e.g., CoT, ToT, RAP). The analysis reveals interesting findings about the factors contributing to reasoning performance, including reward guidance, the breadth-vs-depth trade-off in search, world models, and prompt formats.
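The unified formulation of search, reward, and world model components can be illustrated with a minimal sketch. The code below is a hypothetical example of one instance of that decomposition (a greedy best-first search over reasoning steps); the function and parameter names are illustrative and do not follow the actual LLM Reasoners API.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical sketch of the (search, reward, world model) decomposition
# described above; names are illustrative, not the LLM Reasoners API.

@dataclass(order=True)
class Node:
    neg_reward: float                       # negated cumulative reward (min-heap)
    state: str = field(compare=False)
    steps: Tuple[str, ...] = field(compare=False, default=())

def best_first_search(
    init_state: str,
    propose: Callable[[str], List[str]],     # candidate next reasoning steps
    world_model: Callable[[str, str], str],  # predicts the next state
    reward: Callable[[str, str], float],     # scores a (state, step) pair
    is_terminal: Callable[[str], bool],
    max_expansions: int = 100,
) -> Tuple[str, ...]:
    """Greedy best-first search guided by the reward: repeatedly expand the
    highest-reward partial reasoning chain until a terminal state is found."""
    frontier = [Node(0.0, init_state)]
    for _ in range(max_expansions):
        if not frontier:
            break
        node = heapq.heappop(frontier)
        if is_terminal(node.state):
            return node.steps
        for step in propose(node.state):
            next_state = world_model(node.state, step)
            r = reward(node.state, step)
            heapq.heappush(
                frontier,
                Node(node.neg_reward - r, next_state, node.steps + (step,)),
            )
    return ()

# Toy usage: states are counters, each step "+1" increments the state.
chain = best_first_search(
    "0",
    propose=lambda s: ["+1"],
    world_model=lambda s, step: str(int(s) + 1),
    reward=lambda s, step: 1.0,
    is_terminal=lambda s: int(s) >= 3,
)
```

Swapping in a different search procedure (e.g., beam search or MCTS), a different reward (e.g., LLM self-evaluation), or a different world model (e.g., an LLM predicting the next state, as in RAP) recovers the various reasoning algorithms the abstract compares.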