Large language models are usually fine-tuned to align with human preferences. However, fine-tuning a large language model can be challenging. In this work, we introduce $\textit{weak-to-strong search}$, framing the alignment of a large language model as a test-time greedy search that maximizes the log-probability difference between small tuned and untuned models while sampling from the frozen large model. This method serves as both (1) a compute-efficient model up-scaling strategy that avoids directly tuning the large model and (2) an instance of weak-to-strong generalization that enhances a strong model with weak test-time guidance. Empirically, we demonstrate the flexibility of weak-to-strong search across different tasks. In controlled-sentiment generation and summarization, we use tuned and untuned $\texttt{gpt2}$s to improve the alignment of large models without additional training. Crucially, on a more difficult instruction-following benchmark, AlpacaEval 2.0, we show that reusing off-the-shelf small models (e.g., $\texttt{zephyr-7b-beta}$ and its untuned version) can improve the length-controlled win rates of both white-box and black-box large models against $\texttt{gpt-4-turbo}$ (e.g., $34.4\% \rightarrow 37.9\%$ for $\texttt{Llama-3-70B-Instruct}$ and $16.0\% \rightarrow 20.1\%$ for $\texttt{gpt-3.5-turbo-instruct}$), despite the small models' low win rates of $\approx 10.0\%$.
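The search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: at each step, several candidate continuations are sampled from the frozen large model, and the candidate maximizing the small-model log-probability difference $\log \pi_{\text{tuned}} - \log \pi_{\text{untuned}}$ is kept greedily. The stub models at the bottom are hypothetical stand-ins; real use would score candidates with, e.g., $\texttt{zephyr-7b-beta}$ and its untuned base model.

```python
def weak_to_strong_search(prompt, sample_large, logp_tuned, logp_untuned,
                          num_candidates=4, chunk_len=16, max_chunks=8):
    """Chunk-level greedy search (sketch). Samples candidate continuations
    from the frozen large model and keeps the one that maximizes the
    log-probability difference between the small tuned and untuned models."""
    text = prompt
    for _ in range(max_chunks):
        # sample candidate continuations from the frozen large model
        candidates = [sample_large(text, chunk_len) for _ in range(num_candidates)]
        if not candidates:
            break
        # score each candidate with the small models' log-prob difference
        best = max(candidates,
                   key=lambda c: logp_tuned(text, c) - logp_untuned(text, c))
        text += best
    return text

# --- toy stand-in models (hypothetical, for illustration only) ---
_words = [" good", " bad"]
_state = {"i": 0}

def sample_large(ctx, n):
    # deterministic stub "large model": alternates between two continuations
    w = _words[_state["i"] % 2]
    _state["i"] += 1
    return w

def logp_tuned(ctx, cand):
    # stub tuned small model: prefers "good" continuations
    return 0.0 if "good" in cand else -2.0

def logp_untuned(ctx, cand):
    # stub untuned small model: indifferent
    return -1.0

out = weak_to_strong_search("Review:", sample_large, logp_tuned, logp_untuned,
                            num_candidates=2, chunk_len=1, max_chunks=3)
print(out)  # the guidance steers every chunk toward " good"
```

The large model is only ever sampled from, never differentiated through, which is why the same procedure applies to black-box models such as $\texttt{gpt-3.5-turbo-instruct}$.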