Diffusion models have emerged as a leading approach to text-to-image generation, producing high-quality images from textual descriptions. However, achieving detailed control over the generated image through text alone remains a laborious trial-and-error endeavor. Recent methods have introduced image-level controls alongside text prompts, using prior images to extract conditional information such as edges, segmentation maps, and depth maps. While effective, these methods apply the conditions uniformly across the entire image, limiting localized control. In this paper, we propose a novel methodology that enables precise local control over user-defined regions of an image, while leaving the diffusion model to autonomously generate the remaining areas according to the original prompt. Our approach introduces a new training framework that incorporates masking features and an additional loss term, which leverages the prediction of the initial latent vector at any diffusion step to strengthen the correspondence between the current step and the final sample in the latent space. Extensive experiments demonstrate that our method effectively synthesizes high-quality images under controlled local conditions.
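To make the described loss term concrete, the following is a minimal sketch assuming the standard DDPM parameterization, in which the initial latent can be estimated from the noisy latent and the predicted noise at any step; the masked auxiliary loss shown here is an illustrative form under these assumptions, with the symbols $z_t$, $\epsilon_\theta$, $\bar{\alpha}_t$, $c$, and $M$ not taken from the original text but adopted from common diffusion notation:

\[
\hat{z}_0 \;=\; \frac{z_t - \sqrt{1 - \bar{\alpha}_t}\,\epsilon_\theta(z_t, t, c)}{\sqrt{\bar{\alpha}_t}},
\qquad
\mathcal{L}_{\text{local}} \;=\; \mathbb{E}_{z_0,\epsilon,t}\!\left[\big\| M \odot \big(\hat{z}_0 - z_0\big) \big\|_2^2\right],
\]

where $z_t$ is the noisy latent at step $t$, $\epsilon_\theta$ is the noise-prediction network conditioned on $c$ (text prompt and local control), $\bar{\alpha}_t$ is the cumulative noise schedule, and $M$ is a binary mask selecting the user-defined region. The intent of such a term is to tie the intermediate prediction $\hat{z}_0$ at any diffusion step to the final sample within the controlled region, as the abstract describes, while the unmasked region remains driven by the standard denoising objective.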