Large language models (LLMs) have shown impressive success in various applications. However, these models are often not well aligned with human intents, which calls for additional treatment -- the alignment problem. To make LLMs better follow user instructions, existing alignment methods primarily focus on further training them. However, such extra training is usually expensive in terms of GPU compute; worse, some LLMs, such as the GPT series, are not accessible for user-directed training at all. In this work, we take a different perspective -- Black-Box Prompt Optimization (BPO) -- to perform alignment. The idea is to optimize user prompts to suit the LLM's input understanding, so as to best realize users' intents without updating the LLM's parameters. BPO leverages human preferences to optimize prompts, making it superior to an LLM (e.g., ChatGPT) acting as a prompt engineer. Moreover, BPO is model-agnostic: empirical results show that BPO-aligned ChatGPT yields a 22% increase in win rate against its original version, and BPO-aligned GPT-4 a 10% increase. Notably, BPO-aligned LLMs can outperform the same models aligned by PPO and DPO, and combining BPO with PPO or DPO brings additional performance gains. Code and datasets are released at https://github.com/thu-coai/BPO.