Today's generative models thrive on large amounts of supervised data and informative reward functions that characterize the quality of a generation. They work under the assumptions that the supervised data provides the knowledge to pre-train the model, and that the reward function provides dense information about how to further improve the generation's quality and correctness. However, in the hardest instances of important problems, two obstacles arise: (1) the base generative model attains a near-zero reward signal, and (2) calls to the reward oracle are expensive. This setting poses a fundamentally different learning challenge than standard reward-based post-training. To address it, we propose BaNEL (Bayesian Negative Evidence Learning), an algorithm that post-trains the model using failed attempts only, while minimizing the number of reward evaluations (NREs). Our method is based on the idea that learning the regularities underlying failures can be cast as another, in-loop generative modeling problem. We then leverage this model to assess whether new data resembles previously seen failures and steer the generation away from them. We show that BaNEL improves model performance without observing a single successful sample on several sparse-reward tasks, outperforming existing novelty-bonus approaches by up to several orders of magnitude in success rate while using fewer reward evaluations.
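To make the core idea concrete, below is a minimal, self-contained toy sketch, not the authors' implementation: a Bernoulli policy over bit-strings faces a sparse 0/1 reward, a simple per-position model is fit on failed samples, and a REINFORCE-style update penalizes new samples in proportion to their log-likelihood under that failure model. All names, the choice of failure model, the update rule, and the hyperparameters (`N`, `BATCH`, `BETA`, `LR`) are illustrative assumptions; in particular, this sketch queries the reward on every sample, whereas BaNEL is explicitly designed to minimize such oracle calls.

```python
import torch

torch.manual_seed(0)
N, BATCH, BETA, LR = 12, 256, 0.05, 0.1
target = torch.randint(0, 2, (N,)).float()            # hidden "correct" generation
logits = torch.zeros(N, requires_grad=True)           # policy parameters
fail_counts, fail_total = torch.full((N,), 0.5), 1.0  # smoothed per-position failure model

opt = torch.optim.Adam([logits], lr=LR)
for step in range(500):
    probs = torch.sigmoid(logits)
    samples = torch.bernoulli(probs.detach().expand(BATCH, N))
    rewards = (samples == target).all(dim=1).float()  # sparse: almost always 0

    # Log-likelihood of each sample under the failure model: a high value
    # means the sample resembles previously observed failures.
    p_fail = fail_counts / fail_total
    fail_logp = (samples * p_fail.log()
                 + (1 - samples) * (1 - p_fail).log()).sum(dim=1)

    # Shaped signal: true reward minus a penalty for resembling past failures.
    shaped = rewards - BETA * fail_logp

    # REINFORCE-style update with a mean baseline, steering the policy
    # toward samples the failure model finds novel.
    logp = (samples * probs.log()
            + (1 - samples) * (1 - probs).log()).sum(dim=1)
    loss = -(logp * (shaped - shaped.mean()).detach()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Refit the failure model on this batch's failed attempts only.
    failed = samples[rewards == 0]
    fail_counts = fail_counts + failed.sum(dim=0)
    fail_total += failed.shape[0]
```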