We present a lighting-aware image editing pipeline that, given a portrait image and a text prompt, performs single-image relighting. Our model modifies the lighting and color of both the foreground and the background to align with the provided text description. The unbounded creativity of text allows a scene's lighting to be described through any sensory attribute, including temperature, emotion, smell, and time of day. However, modeling the mapping between such unbounded text and lighting is extremely challenging because no scalable dataset provides large numbers of text-relighting pairs; as a result, current text-driven image editing models do not generalize to lighting-specific use cases. We overcome this problem by introducing a novel data synthesis pipeline: First, diverse and creative text prompts that describe scenes under various lighting conditions are automatically generated under a crafted hierarchy using a large language model (*e.g.,* ChatGPT). A text-guided image generation model then creates the lighting image that best matches each prompt. Conditioned on these lighting images, we perform image-based relighting of both the foreground and the background, using either a single portrait image or a set of OLAT (One-Light-at-A-Time) images captured with a light stage system. For background relighting in particular, we represent each lighting image as a set of point lights and transfer them to other background images. Finally, a generative diffusion model is trained on the synthesized large-scale data with auxiliary task augmentation (*e.g.,* portrait delighting and light positioning) to correlate the latent text and lighting distributions for text-guided portrait relighting.
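The point-light representation used for background relighting can be illustrated with a minimal sketch. The snippet below greedily extracts the `k` brightest pixels of a lighting image as point lights, suppressing a neighborhood around each pick so successive lights land in distinct regions; the function name, parameters, and greedy strategy are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def lighting_to_point_lights(light_img, k=4, suppress_radius=2):
    """Approximate an HxWx3 lighting image by k point lights.

    Greedy sketch (an assumption, not the paper's exact algorithm):
    repeatedly pick the brightest remaining pixel as a light position,
    record its RGB value as the light color, then mask out a small
    neighborhood so the next pick falls elsewhere.
    Returns a list of ((row, col), rgb) pairs.
    """
    luma = light_img.mean(axis=-1).astype(np.float64).copy()
    h, w = luma.shape
    lights = []
    for _ in range(k):
        # Brightest remaining pixel becomes the next point light.
        y, x = np.unravel_index(np.argmax(luma), luma.shape)
        lights.append(((int(y), int(x)), light_img[y, x].copy()))
        # Suppress the local neighborhood around the chosen light.
        y0, y1 = max(0, y - suppress_radius), min(h, y + suppress_radius + 1)
        x0, x1 = max(0, x - suppress_radius), min(w, x + suppress_radius + 1)
        luma[y0:y1, x0:x1] = -np.inf
    return lights
```

Transferring the extracted lights to another background then amounts to re-rendering that background under the same positions and colors, which keeps the text-conditioned lighting consistent across foreground and background.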