Style control has been popular in video generation models. Existing methods often generate videos far from the given style, cause content leakage, and struggle to transfer a given video to the desired style. Our first observation is that the style extraction stage matters, whereas existing methods emphasize global style but ignore local textures. To incorporate texture features while preventing content leakage, we filter out content-related patches and retain style-related ones based on prompt-patch similarity; for global style extraction, we generate a paired style dataset through model illusion to facilitate contrastive learning, which greatly enhances absolute style consistency. Moreover, to bridge the image-to-video gap, we train a lightweight motion adapter on still videos, which implicitly enhances the stylization extent and enables our image-trained model to be seamlessly applied to videos. Benefiting from these efforts, our approach, StyleMaster, not only achieves significant improvement in both style resemblance and temporal coherence, but can also easily generalize to video style transfer with a gray tile ControlNet. Extensive experiments and visualizations demonstrate that StyleMaster significantly outperforms competitors, effectively generating high-quality stylized videos that align with textual content and closely resemble the style of reference images. Our project page is at https://zixuan-ye.github.io/stylemaster
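To make the prompt-patch filtering concrete, below is a minimal sketch assuming CLIP-style embeddings: patches whose embeddings are most similar to the text prompt are treated as content-related and dropped, while the least similar ones are retained as style patches. The function name `select_style_patches` and the `keep_ratio` value are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def select_style_patches(patch_embeds: torch.Tensor,
                         prompt_embed: torch.Tensor,
                         keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the patches least similar to the content prompt.

    patch_embeds: (N, D) embeddings of the N image patches.
    prompt_embed: (D,) embedding of the text prompt describing the content.
    keep_ratio:   fraction of patches retained as style patches (assumed value).
    """
    # Cosine similarity between each patch and the prompt; high similarity
    # indicates a content-related patch that risks content leakage.
    sim = F.cosine_similarity(patch_embeds,
                              prompt_embed.unsqueeze(0), dim=-1)  # (N,)
    k = max(1, int(keep_ratio * patch_embeds.size(0)))
    # Keep the k patches *least* similar to the prompt as style patches.
    _, idx = torch.topk(sim, k, largest=False)
    return patch_embeds[idx]
```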
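For the global style branch, a standard contrastive objective over the paired style dataset could look like the sketch below, where the two images of a pair (same style, different content) act as positives and the rest of the batch as negatives. The InfoNCE form and temperature are common defaults assumed here, not the paper's confirmed loss.

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(anchor: torch.Tensor,
                           positive: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over a batch of paired global style embeddings.

    anchor, positive: (B, D) style embeddings of the two views of each
    style pair; other samples in the batch serve as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    # Matching style pairs sit on the diagonal; cross-entropy pulls them
    # together and pushes apart embeddings of different styles.
    return F.cross_entropy(logits, labels)
```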