This work focuses on generating high-quality images that combine the specific style of reference images with the content of a given textual description. Current leading approaches, e.g., DreamBooth and LoRA, require fine-tuning for each style, making the process time-consuming and computationally expensive. In this work, we propose StyleAdapter, a unified stylized image generation model capable of producing a variety of stylized images that match both the content of a given prompt and the style of reference images, without the need for per-style fine-tuning. It introduces a two-path cross-attention (TPCA) module that processes the style information and the textual prompt separately, and cooperates with a semantic suppressing vision model (SSVM) to suppress the semantic content of the style images. In this way, the prompt retains control over the content of the generated images, while the negative influence of semantic information leaking from the style references is mitigated. As a result, the content of the generated image adheres to the prompt, and its style aligns with the style references. Moreover, StyleAdapter can be integrated with existing controllable synthesis methods, such as T2I-adapter and ControlNet, to achieve a more controllable and stable generation process. Extensive experiments demonstrate the superiority of our method over previous works.
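The two-path idea can be illustrated with a minimal sketch: the latent tokens attend once to the prompt embeddings (content path) and once to the style-reference embeddings (style path), and the two results are fused with a residual sum. This is a hypothetical single-head NumPy illustration under assumed shapes and a scalar gate on the style path; the function names, fusion rule, and dimensions are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, w_q, w_k, w_v):
    # Single-head cross-attention: queries from q_feats, keys/values from kv_feats.
    Q, K, V = q_feats @ w_q, kv_feats @ w_k, kv_feats @ w_v
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return scores @ V

def tpca(x, text_emb, style_emb, params, gate=0.5):
    # Two separate paths: the latent attends to the text embeddings (content)
    # and to the style embeddings (style); outputs are fused residually,
    # with a scalar gate weighting the style path (an assumption here).
    t_out = cross_attention(x, text_emb, *params["text"])
    s_out = cross_attention(x, style_emb, *params["style"])
    return x + t_out + gate * s_out

rng = np.random.default_rng(0)
d = 32
params = {k: tuple(rng.standard_normal((d, d)) * 0.1 for _ in range(3))
          for k in ("text", "style")}
x = rng.standard_normal((16, d))      # latent image tokens
text = rng.standard_normal((77, d))   # prompt embeddings
style = rng.standard_normal((8, d))   # style-reference embeddings
out = tpca(x, text, style, params)
```

Keeping the two attention paths separate (rather than concatenating text and style tokens into one key/value sequence) is what lets the model weight style evidence independently of the prompt.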