The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses accord with human preferences. Prevalent alignment techniques, such as DPO, PPO, and their variants, align LLMs by updating the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias generation toward responses aligned with human preferences. The best-known inference-time alignment method, Best-of-N, is as effective as state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally infeasible. In this work, we introduce Speculative Rejection, a computationally viable inference-time alignment algorithm. Like Best-of-N, it generates high-scoring responses according to a given reward model, while being 16 to 32 times more computationally efficient.
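The Best-of-N procedure referenced above can be sketched in a few lines: sample N candidate responses, score each with a reward model, and keep the highest-scoring one. This is a minimal illustrative sketch, not the paper's implementation; `toy_generate` and `toy_reward` are hypothetical stand-ins for an actual LLM sampler and reward model.

```python
import random

def best_of_n(prompt, generate, reward, n=4):
    """Best-of-N sampling: draw n candidate responses for the prompt and
    return the one the reward model scores highest. The cost is roughly
    n full generations per prompt, which is what makes the method expensive."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)

# Hypothetical stand-ins, for illustration only.
def toy_generate(prompt):
    # A real system would sample a response from an LLM here.
    return prompt + " " + random.choice(["ok", "good answer", "a detailed, helpful answer"])

def toy_reward(response):
    # A real reward model would score alignment with human preferences.
    return len(response)

best = best_of_n("Explain alignment:", toy_generate, toy_reward, n=8)
```

Speculative Rejection keeps this interface (a generator plus a reward model) but avoids completing all N responses, halting unpromising generations early to cut the inference cost.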