Diffusion models have achieved impressive success in generating photorealistic images, but challenges remain in ensuring precise semantic alignment with input prompts. Optimizing the initial noisy latent offers a more efficient alternative to modifying model architectures or engineering prompts for improving semantic alignment. A recent approach, InitNo, refines the initial noisy latent by leveraging attention maps; however, these maps capture only limited information, and the effectiveness of InitNo is highly dependent on the initial starting point, as it tends to converge on a local optimum near that point. To address these limitations, this paper proposes leveraging the language comprehension capabilities of large vision-language models (LVLMs) to guide the optimization of the initial noisy latent, and introduces the Noise Diffusion process, which updates the noisy latent to generate semantically faithful images while preserving distribution consistency. Furthermore, we provide a theoretical analysis of the conditions under which the update improves semantic faithfulness. Experimental results demonstrate the effectiveness and adaptability of our framework, consistently enhancing semantic alignment across various diffusion models. The code is available at https://github.com/Bomingmiao/NoiseDiffusion.