LLMs are typically trained to answer user questions or follow instructions similarly to how human experts respond. However, in the standard alignment framework they lack the basic ability to think explicitly before answering. Thinking is important for complex questions that require reasoning and planning, but it can be applied to any task. We propose a training method for equipping existing LLMs with such thinking abilities for general instruction following without the use of additional human data. We achieve this via an iterative search and optimization procedure that explores the space of possible thought generations, allowing the model to learn how to think without direct supervision. For each instruction, the thought candidates are scored using a judge model that evaluates their responses only, and the model is then optimized via preference optimization. We show that this procedure leads to superior performance on AlpacaEval and Arena-Hard, and yields gains from thinking on non-reasoning categories such as marketing, health, and general knowledge, in addition to more traditional reasoning and problem-solving tasks.
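The core loop described above, sampling thought-plus-response candidates, scoring only the responses with a judge, and forming preference pairs for optimization, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_thought_response` and `judge_score` are hypothetical stand-ins for the LLM sampler and the judge model, and the resulting pairs would feed a preference-optimization trainer such as DPO.

```python
import random

def sample_thought_response(instruction, k=4, seed=0):
    """Hypothetical stand-in for sampling k (thought, response)
    candidates from the LLM for one instruction."""
    rng = random.Random(seed)
    return [(f"thought {i} about: {instruction}",
             f"response{'!' * rng.randint(0, 9)}") for i in range(k)]

def judge_score(instruction, response):
    """Hypothetical stand-in for a judge model. Note it sees the
    response ONLY; the thought is hidden from the judge."""
    return len(response)  # toy proxy for judge quality score

def build_preference_pair(instruction, k=4):
    """One search-and-optimization step: sample candidates, score
    their responses, and keep the best/worst full (thought, response)
    pairs as chosen/rejected examples for preference optimization."""
    candidates = sample_thought_response(instruction, k)
    scored = sorted(candidates,
                    key=lambda tr: judge_score(instruction, tr[1]))
    chosen, rejected = scored[-1], scored[0]
    return {"prompt": instruction, "chosen": chosen, "rejected": rejected}

pair = build_preference_pair("Explain photosynthesis.")
```

Because the judge never sees the thought, the model is free to discover whatever internal thinking best improves its final responses, which is what allows the procedure to run without direct supervision of the thoughts themselves.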