Large language models are usually fine-tuned to align with human preferences. However, fine-tuning a large language model can be challenging. In this work, we introduce $\textit{weak-to-strong search}$, framing the alignment of a large language model as a test-time greedy search to maximize the log-probability difference between small tuned and untuned models while sampling from the frozen large model. This method serves as both (1) a compute-efficient model up-scaling strategy that avoids directly tuning the large model and (2) an instance of weak-to-strong generalization that enhances a strong model with weak test-time guidance. Empirically, we demonstrate the flexibility of weak-to-strong search across different tasks. In controlled-sentiment generation and summarization, we use tuned and untuned $\texttt{gpt2}$s to improve the alignment of large models without additional training. Crucially, in a more difficult instruction-following benchmark, AlpacaEval 2.0, we show that reusing off-the-shelf small models (e.g., $\texttt{zephyr-7b-beta}$ and its untuned version) can improve the length-controlled win rates of both white-box and black-box large models against $\texttt{gpt-4-turbo}$ (e.g., $34.4\% \rightarrow 37.9\%$ for $\texttt{Llama-3-70B-Instruct}$ and $16.0\% \rightarrow 20.1\%$ for $\texttt{gpt-3.5-turbo-instruct}$), despite the small models' low win rates of $\approx 10.0\%$.
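The test-time greedy search described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the hand-set log-probability table, the token vocabulary, and the helper names (`logp`, `weak_to_strong_step`, `generate`) are all hypothetical stand-ins for a frozen large model (the sampler) and a small tuned/untuned pair (the scorer).

```python
import math
import random

# Toy vocabulary; "<eos>" ends generation.
VOCAB = ["good", "bad", "okay", "<eos>"]

def logp(model, token):
    # Hypothetical per-token log-probabilities. A real implementation would
    # condition on the full context and query actual language models.
    table = {
        "large":   {"good": -1.2, "bad": -1.2, "okay": -1.4, "<eos>": -1.8},
        "tuned":   {"good": -0.5, "bad": -3.0, "okay": -1.5, "<eos>": -1.2},
        "untuned": {"good": -1.4, "bad": -1.4, "okay": -1.4, "<eos>": -1.4},
    }
    return table[model][token]

def weak_to_strong_step(candidates):
    # Greedy step: among candidates sampled from the frozen large model, keep
    # the one maximizing the tuned-vs-untuned log-probability difference,
    # which acts as a weak test-time reward signal.
    return max(candidates, key=lambda t: logp("tuned", t) - logp("untuned", t))

def generate(num_candidates=3, max_len=5, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(max_len):
        # Sample candidate tokens from the frozen large model's distribution.
        weights = [math.exp(logp("large", t)) for t in VOCAB]
        candidates = rng.choices(VOCAB, weights=weights, k=num_candidates)
        token = weak_to_strong_step(candidates)
        if token == "<eos>":
            break
        out.append(token)
    return out

print(generate())
```

Note the division of labor: the large model alone proposes candidates, so its weights are never touched, while the small tuned/untuned pair only re-ranks them, which is why low-win-rate small models can still steer a much stronger sampler.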