We study inference-time reward-guided alignment for generative models. Existing methods often rely on either architecture-specific adaptations or computationally costly inference procedures. We introduce Learnable Chernoff Baselines (LCBs) as a method for efficiently and approximately sampling from the exponentially tilted kernels that arise from KL-regularized reward alignment. Using only black-box sampling access to the pretrained model, LCBs implement a form of rejection sampling with adaptively selected acceptance probabilities, which allows fine-grained control over inference-compute scaling. We establish guarantees on the total-variation distance to the ideal aligned model, and demonstrate in both continuous and discrete diffusion settings that LCB sampling closely matches ideal rejection sampling while using substantially fewer queries to the pretrained model.
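For context, the ideal target and sampler referenced above admit a standard formulation (a sketch under generic assumptions; the symbols $p$, $r$, $\beta$, $M$ below are illustrative, not necessarily the paper's notation): with pretrained model $p$, reward $r$, and regularization strength $\beta$, KL-regularized alignment yields the exponentially tilted target, and ideal rejection sampling accepts i.i.d. candidates $x \sim p$ with probability
\[
\pi_\beta(x) \;\propto\; p(x)\, e^{r(x)/\beta},
\qquad
\Pr[\text{accept} \mid x] \;=\; e^{(r(x) - M)/\beta},
\quad M \ge \sup_x r(x).
\]
The constant $M$ makes the scheme exact but potentially query-hungry when it is loose; the adaptively selected acceptance probabilities described above can be read as replacing this conservative constant with a learnable baseline, trading a controlled total-variation error for fewer queries to the pretrained model.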