This paper presents a novel method for exerting fine-grained lighting control during text-driven diffusion-based image generation. While existing diffusion models already have the ability to generate images under any lighting condition, without additional guidance these models tend to correlate image content and lighting. Moreover, text prompts lack the necessary expressive power to describe detailed lighting setups. To provide the content creator with fine-grained control over the lighting during image generation, we augment the text prompt with detailed lighting information in the form of radiance hints, i.e., visualizations of the scene geometry with a homogeneous canonical material under the target lighting. However, the scene geometry needed to produce the radiance hints is unknown. Our key observation is that we only need to guide the diffusion process, hence exact radiance hints are not necessary; we only need to point the diffusion model in the right direction. Based on this observation, we introduce a three-stage method for controlling the lighting during image generation. In the first stage, we leverage a standard pretrained diffusion model to generate a provisional image under uncontrolled lighting. Next, in the second stage, we resynthesize and refine the foreground object in the generated image by passing the target lighting to a refined diffusion model, named DiLightNet, using radiance hints computed on a coarse shape of the foreground object inferred from the provisional image. To retain the texture details, we multiply the radiance hints by a neural encoding of the provisional image before passing them to DiLightNet. Finally, in the third stage, we resynthesize the background to be consistent with the lighting on the foreground object. We demonstrate and validate our lighting-controlled diffusion model on a variety of text prompts and lighting conditions.
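
The following Python sketch lays out the control flow of the three-stage pipeline described above, assuming a diffusers-style pretrained text-to-image backbone for the first stage. The helpers `estimate_coarse_shape`, `render_radiance_hints`, `encode_image`, `dilightnet_refine`, and `resynthesize_background` are hypothetical stubs standing in for the method's components, not a released API; the checkpoint name is likewise only a placeholder for any pretrained diffusion model.

```python
"""Structural sketch of the three-stage, lighting-controlled generation pipeline.
Only the diffusers calls are real library API; everything marked 'hypothetical'
is a placeholder stub for a component of the method."""
import torch
from diffusers import StableDiffusionPipeline


def estimate_coarse_shape(image):
    """Hypothetical: infer a coarse foreground shape (e.g. from monocular depth)
    and a foreground mask from the provisional image."""
    raise NotImplementedError


def render_radiance_hints(coarse_shape, target_lighting):
    """Hypothetical: render the coarse shape with a homogeneous canonical
    material under the target lighting to obtain the radiance hint images."""
    raise NotImplementedError


def encode_image(image):
    """Hypothetical: neural encoding of the provisional image that preserves
    texture detail."""
    raise NotImplementedError


def dilightnet_refine(prompt, conditioning, mask):
    """Hypothetical: the refined diffusion model (DiLightNet) that resynthesizes
    the masked foreground given the lighting conditioning."""
    raise NotImplementedError


def resynthesize_background(prompt, foreground, mask, target_lighting):
    """Hypothetical: regenerate a background consistent with the foreground lighting."""
    raise NotImplementedError


def generate_with_lighting_control(prompt, target_lighting, device="cuda"):
    # Stage 1: provisional image under uncontrolled lighting from a standard
    # pretrained text-to-image diffusion model.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"  # placeholder: any pretrained checkpoint
    ).to(device)
    provisional = pipe(prompt).images[0]

    # Stage 2: radiance hints on a coarse foreground shape, multiplied by a
    # neural encoding of the provisional image so texture detail is retained,
    # then passed to DiLightNet to refine the foreground under the target light.
    coarse_shape, fg_mask = estimate_coarse_shape(provisional)
    hints = render_radiance_hints(coarse_shape, target_lighting)
    conditioning = encode_image(provisional) * hints
    foreground = dilightnet_refine(prompt, conditioning, fg_mask)

    # Stage 3: background resynthesized to be consistent with the new lighting.
    return resynthesize_background(prompt, foreground, fg_mask, target_lighting)
```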