Inference-time computation methods enhance the performance of Large Language Models (LLMs) by leveraging additional computational resources to achieve superior results. Common techniques, such as Best-of-N sampling, Majority Voting, and variants of tree-search algorithms, have proven effective in boosting the performance of LLMs. These approaches strategically trade increased computational resources for improved model responses. In this work, we propose DARWIN, an inference-time alignment method that leverages the guidance of a reward model to achieve alignment through a reward-guided tree search. Empirical evidence indicates that our method outperforms other inference-time alignment methods, such as Best-of-N and ARGS, on two widely accepted alignment benchmarks, AlpacaEval 2 and MT-Bench. Furthermore, we show that our inference-time approach achieves performance comparable to preference-tuned models on both benchmarks, highlighting the effectiveness of trading additional compute at inference time for enhanced performance. We have released our code at https://github.com/declare-lab/darwin.
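The Best-of-N baseline mentioned above can be sketched in a few lines. This is a minimal, self-contained illustration with toy stand-ins for the LLM sampler and the reward model (both `generate` and `reward` below are hypothetical placeholders, not DARWIN's components; DARWIN's reward-guided tree search is not shown here):

```python
import random

def generate(prompt: str, seed: int) -> str:
    # Stand-in for sampling one response from an LLM
    # (a real system would call the model with temperature > 0).
    rng = random.Random(seed)
    return f"response-{rng.randint(0, 999)}"

def reward(prompt: str, response: str) -> float:
    # Stand-in for a learned reward model scoring a (prompt, response) pair.
    return (sum(ord(c) for c in response) % 100) / 100.0

def best_of_n(prompt: str, n: int = 8) -> str:
    # Sample N candidate responses, score each with the reward model,
    # and return the highest-scoring candidate.
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=lambda r: reward(prompt, r))

print(best_of_n("Explain inference-time alignment.", n=8))
```

Reward-guided tree search generalizes this idea: instead of scoring only complete responses, the reward model steers which partial generations are expanded, trading the same kind of extra compute for better-aligned outputs.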