Text-to-image diffusion models have recently received increasing interest for their astonishing ability to produce high-fidelity images from text inputs alone. Subsequent research efforts aim to exploit and apply their capabilities to real image editing. However, existing image-to-image methods are often inefficient, imprecise, and of limited versatility: they either require time-consuming finetuning, deviate unnecessarily strongly from the input image, and/or lack support for multiple, simultaneous edits. To address these issues, we introduce LEDITS++, an efficient yet versatile and precise textual image manipulation technique. First, LEDITS++'s novel inversion approach requires no tuning or optimization and produces high-fidelity results with only a few diffusion steps. Second, our methodology supports multiple simultaneous edits and is architecture-agnostic. Third, we use a novel implicit masking technique that limits changes to relevant image regions. As part of our exhaustive evaluation, we propose the novel TEdBench++ benchmark. Our results demonstrate the capabilities of LEDITS++ and its improvements over previous methods.