We introduce Style Tailoring, a recipe for finetuning Latent Diffusion Models (LDMs) in a distinct domain with high visual quality, prompt alignment and scene diversity. We choose sticker image generation as the target domain, as sticker images differ significantly from the photorealistic samples typically generated by large-scale LDMs. We start with a competent text-to-image model, such as Emu, and show that relying on prompt engineering with a photorealistic model to generate stickers leads to poor prompt alignment and scene diversity. To overcome these drawbacks, we first finetune Emu on millions of sticker-like images collected using weak supervision to elicit diversity. Next, we curate human-in-the-loop (HITL) Alignment and Style datasets from model generations, and finetune on them to improve prompt alignment and style alignment respectively. Sequential finetuning on these datasets poses a tradeoff between style alignment and prompt alignment gains. To address this tradeoff, we propose a novel finetuning method called Style Tailoring, which jointly fits the content and style distributions and achieves the best tradeoff. Evaluation results show our method improves visual quality by 14%, prompt alignment by 16.2% and scene diversity by 15.3%, compared to prompt engineering the base Emu model for sticker generation.