In this work, we address the task of text-driven style transfer in the context of text-to-image (T2I) diffusion models. The main challenge is preserving structure consistently while achieving effective stylization. Previous approaches in this field directly concatenate the content and style prompts for prompt-level style injection, which leads to unavoidable structure distortions. We propose a novel solution to the text-driven style transfer task, namely Adaptive Style Incorporation~(ASI), which achieves fine-grained feature-level style incorporation. It consists of Siamese Cross-Attention~(SiCA), which decouples single-track cross-attention into a dual-track structure to obtain separate content and style features, and the Adaptive Content-Style Blending~(AdaBlending) module, which couples the content and style information in a structure-consistent manner. Experimentally, our method exhibits much better performance in both structure preservation and stylization.
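To make the dual-track idea concrete, below is a minimal NumPy sketch of a SiCA-style computation: a shared query attends separately to content and style text features, and the two tracks are then combined. The blending rule shown here is a simple convex combination controlled by a hypothetical `alpha` parameter, standing in for the paper's actual AdaBlending module; all function names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    # standard scaled dot-product cross-attention (single head)
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))
    return attn @ v

def siamese_cross_attention(q, k_c, v_c, k_s, v_s, alpha=0.5):
    # Dual-track: the same spatial query attends to content and
    # style text embeddings independently, yielding separate features.
    out_content = cross_attention(q, k_c, v_c)
    out_style = cross_attention(q, k_s, v_s)
    # Placeholder for AdaBlending: here, a plain convex blend;
    # the actual module blends adaptively in a structure-consistent way.
    return (1 - alpha) * out_content + alpha * out_style

# Toy shapes: 4 spatial queries of dim 8; content prompt of 5 tokens,
# style prompt of 3 tokens.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k_c, v_c = rng.standard_normal((5, 8)), rng.standard_normal((5, 8))
k_s, v_s = rng.standard_normal((3, 8)), rng.standard_normal((3, 8))
out = siamese_cross_attention(q, k_c, v_c, k_s, v_s, alpha=0.3)
```

With `alpha=0` the output reduces to the pure content track, which is one way to sanity-check that structure information flows through unchanged.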