Personalized text-to-image models allow users to generate varied styles of images (specified with a sentence) for an object (specified with a set of reference images). While remarkable results have been achieved using diffusion-based generation models, the visual structure and details of the object are often unexpectedly changed during the diffusion process. One major reason is that these diffusion-based approaches typically adopt a simple reconstruction objective during training, which can hardly enforce appropriate structural consistency between the generated and the reference images. To this end, in this paper, we design a novel reinforcement learning framework by utilizing the deterministic policy gradient method for personalized text-to-image generation, with which various objectives, differentiable or even non-differentiable, can be easily incorporated to supervise the diffusion models and improve the quality of the generated images. Experimental results on personalized text-to-image generation benchmark datasets demonstrate that our proposed approach outperforms existing state-of-the-art methods by a large margin in visual fidelity while maintaining text alignment. Our code is available at: \url{https://github.com/wfanyue/DPG-T2I-Personalization}.