Existing methods for preference tuning of text-to-image (T2I) diffusion models often rely on computationally expensive generation steps to create positive and negative image pairs. These approaches frequently yield training pairs that lack meaningful differences, are expensive to sample and filter, or exhibit significant variance in irrelevant pixel regions, thereby degrading training efficiency. To address these limitations, we introduce "Di3PO", a novel method for constructing positive and negative pairs that isolates the specific regions targeted for improvement during preference tuning while keeping the surrounding image context stable. We demonstrate the efficacy of our approach on the challenging task of text rendering in diffusion models, showing improvements over SFT and DPO baselines.