Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control over the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting -- the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study. Our code is available at: https://github.com/exx8/differential-diffusion
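To convey the core idea of per-pixel change control, here is a minimal toy sketch in NumPy. It is an illustration under stated assumptions, not the paper's implementation: the function names (`differential_edit`, `denoise_step`) are hypothetical, the denoiser is a placeholder, and for simplicity pixels are snapped back to the clean source rather than to a properly noised copy of it, as a faithful implementation would do. The key mechanism illustrated is that a per-pixel change map acts as a per-pixel schedule: pixels with a low requested change rejoin the source image for most of the denoising trajectory, while pixels with a high requested change are free to be regenerated.

```python
import numpy as np

def differential_edit(source, change_map, denoise_step, T=50, seed=0):
    """Toy sketch of per-pixel change control in a diffusion-style edit.

    source      : array of original pixel values
    change_map  : array in [0, 1]; 0 = keep pixel, 1 = fully regenerate
    denoise_step: placeholder denoiser x -> x' (hypothetical stand-in
                  for one reverse-diffusion step)
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(source.shape)  # start from pure noise
    for t in range(T, 0, -1):
        x = denoise_step(x, t)
        # Pixels whose requested change is below the current threshold
        # t/T are snapped back to the source. NOTE: a real diffusion
        # implementation would blend in a *noised* version of the
        # source matched to timestep t, not the clean source.
        frozen = change_map < t / T
        x = np.where(frozen, source, x)
    return x
```

With a trivial stand-in denoiser, pixels whose change value is 0 end up identical to the source, while pixels whose change value is 1 are determined entirely by the (toy) generative trajectory:

```python
source = np.ones(4)
cmap = np.array([0.0, 0.0, 1.0, 1.0])
out = differential_edit(source, cmap, lambda x, t: 0.9 * x)
# out[:2] equals source; out[2:] is freely regenerated
```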