Large language models (LLMs) can be improved by aligning them with human preferences through fine-tuning, a process known as reinforcement learning from human feedback (RLHF). However, the cost of fine-tuning an LLM is prohibitive for many users. Prediction-time tokenwise reward-guided text generation (RGTG) methods have recently been proposed because they bypass LLM fine-tuning entirely. They use a reward model trained on full sequences to score partial sequences during decoding, steering generation towards sequences with high rewards. However, these methods have so far been only heuristically motivated and poorly analyzed. In this work, we show that reward models trained on full sequences are not compatible with scoring partial sequences. To alleviate this issue, we propose to train a Bradley-Terry reward model explicitly on partial sequences, and to sample autoregressively from the implied tokenwise policy at decoding time. We study the properties of this reward model and the resulting policy: we show that the policy is proportional to the ratio of two distinct RLHF policies. Our simple approach outperforms previous RGTG methods and performs comparably to strong offline baselines without large-scale LLM fine-tuning.
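The two ingredients the abstract names can be sketched concretely: a Bradley-Terry preference loss applied to reward scores of (chosen, rejected) partial-sequence pairs, and a decoding step that re-weights the base model's next-token distribution by the exponentiated reward of each one-token extension. This is a minimal numpy sketch, not the paper's implementation; the function names, the scalar-reward interface, and the temperature `beta` are illustrative assumptions.

```python
import numpy as np

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood of the Bradley-Terry preference model,
    P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    Here the rewards score *partial* sequences, per the proposed method."""
    margin = r_chosen - r_rejected
    return float(np.log1p(np.exp(-margin)))  # -log(sigmoid(margin)), stable form

def reward_guided_step(base_log_probs: np.ndarray,
                       extension_rewards: np.ndarray,
                       beta: float = 1.0) -> np.ndarray:
    """One tokenwise RGTG decoding step (illustrative):
    pi(a | s) ∝ pi_ref(a | s) * exp(r(s, a) / beta),
    where extension_rewards[a] = r(s, a) scores the prefix extended by token a."""
    scores = base_log_probs + extension_rewards / beta
    scores -= scores.max()          # subtract max for numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()      # normalized next-token distribution
```

A larger preference margin yields a smaller loss, and tokens whose one-step extensions score higher under the partial-sequence reward model are boosted relative to the base distribution; sampling from the returned distribution at every step gives the autoregressive tokenwise policy.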