Traditionally, style has been considered primarily in terms of artistic elements such as colors, brushstrokes, and lighting. However, identical semantic subjects, such as people, boats, and houses, can vary significantly across artistic traditions, indicating that style also encompasses the underlying semantics. In this study, we therefore propose a zero-shot scheme for image variation with coordinated semantics. Specifically, our scheme transforms the image-to-image problem into an image-to-text-to-image problem. The image-to-text operation employs vision-language models (e.g., BLIP) to generate text describing the content of the input image, including the objects and their positions. Subsequently, the input style keyword is elaborated into a detailed description of that style and then merged with the content text using the reasoning capabilities of ChatGPT. Finally, the text-to-image operation utilizes a diffusion model to generate images from the resulting text prompt. To enable the diffusion model to accommodate more styles, we propose a fine-tuning strategy that injects text and style constraints into the cross-attention layers, ensuring that the output image exhibits similar semantics in the desired style. To validate the performance of the proposed scheme, we constructed a benchmark comprising images of various styles and scenes and introduced two novel metrics. Despite its simplicity, our scheme yields highly plausible results in a zero-shot manner, particularly for generating stylized images with high-fidelity semantics.
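The prompt-composition step above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the actual scheme elaborates the style keyword with ChatGPT and captions the input image with BLIP, whereas here a hand-written style table and a fixed caption stand in for both, and the function name `compose_prompt` is an assumption.

```python
# Hypothetical stand-in for ChatGPT's style elaboration: a static table
# mapping a style keyword to a detailed description of that style.
STYLE_DESCRIPTIONS = {
    "ukiyo-e": "flat areas of color, bold outlines, woodblock-print texture",
    "impressionist": "visible brushstrokes, soft edges, emphasis on natural light",
}

def compose_prompt(content_text: str, style_keyword: str) -> str:
    """Merge an image caption (content text) with an elaborated style
    description to form the text prompt fed to the diffusion model."""
    style_detail = STYLE_DESCRIPTIONS.get(style_keyword, style_keyword)
    return f"{content_text}, rendered in {style_keyword} style: {style_detail}"

# Example: caption as BLIP might produce it, merged with a style keyword.
prompt = compose_prompt(
    "a boat moored beside a small house on a river", "ukiyo-e"
)
print(prompt)
```

In the actual scheme, the merged prompt would then condition a text-to-image diffusion model fine-tuned with the cross-attention injection strategy described above.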