Diffusion models, which are trained to match the distribution of their training dataset, have become the de facto approach for generating visual data. Beyond matching the data distribution, we often want to control generation to satisfy desired properties, such as alignment with a text description, which can be specified through a black-box reward function. Prior works fine-tune pretrained diffusion models toward this goal with reinforcement learning-based algorithms, but these suffer from slow credit assignment and low quality in their generated samples. In this work, we explore techniques that do not directly maximize the reward but instead generate high-reward images with relatively high probability, a natural scenario for the framework of generative flow networks (GFlowNets). To this end, we propose the Diffusion Alignment with GFlowNet (DAG) algorithm to post-train diffusion models with black-box property functions. Extensive experiments on Stable Diffusion with various reward specifications corroborate that our method can effectively align large-scale text-to-image diffusion models with given reward information.