Diffusion models have emerged as a leading approach to text-to-image generation, producing high-quality images from textual descriptions. However, achieving fine-grained control over the generated image solely through text remains a laborious trial-and-error endeavor. Recent methods have introduced image-level controls alongside text prompts, using prior images to extract conditioning information such as edges, segmentation maps, and depth maps. While effective, these methods apply conditions uniformly across the entire image, limiting localized control. In this paper, we propose a novel methodology that enables precise local control over user-defined regions of an image, while leaving the diffusion model to autonomously generate the remaining areas according to the original prompt. Our approach introduces a new training framework that incorporates feature masking and an additional loss term; the latter leverages the prediction of the initial latent vector at any diffusion step to strengthen the correspondence between the current step and the final sample in latent space. Extensive experiments demonstrate that our method effectively synthesizes high-quality images under controlled local conditions.
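The abstract mentions predicting the initial latent vector at an arbitrary diffusion step. In DDPM-style models this prediction has a standard closed form derived from the forward process. The sketch below illustrates that identity with NumPy; it is an illustrative assumption about the kind of x0-prediction involved, not the paper's actual loss formulation, and the variable names (`alpha_bar_t`, `predict_x0`) are hypothetical.

```python
import numpy as np

def predict_x0(x_t, eps, alpha_bar_t):
    """Closed-form estimate of the initial latent x0 from a noisy latent x_t.

    Assumes the standard DDPM forward process
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    which can be inverted as
        x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t).
    In practice eps would be the network's noise prediction; here we use
    the true noise to verify the identity.
    """
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)

# Toy check: with the true noise, the identity recovers x0 exactly.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))           # "clean" initial latent
eps = rng.standard_normal((4, 4))          # Gaussian noise
alpha_bar_t = 0.3                          # cumulative schedule value at step t
x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

x0_hat = predict_x0(x_t, eps, alpha_bar_t)
assert np.allclose(x0_hat, x0)
```

A loss defined on `x0_hat` rather than on the noise residual lets every timestep be compared directly against the final sample in latent space, which is consistent with the correspondence the abstract describes.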