Modern Text-to-Image (T2I) diffusion models have revolutionized image editing by enabling the generation of high-quality, photorealistic images. While text instructions are the de facto method for performing edits with T2I models, this approach is non-trivial due to the complex many-to-many mapping between natural language and images. In this work, we address exemplar-based image editing -- the task of transferring an edit from an exemplar pair to a content image. We propose ReEdit, a modular and efficient end-to-end framework that captures edits in both text and image modalities while preserving the fidelity of the edited image. We validate the effectiveness of ReEdit through extensive comparisons with state-of-the-art baselines and sensitivity analyses of key design choices. Our results demonstrate that ReEdit consistently outperforms contemporary approaches both qualitatively and quantitatively. In addition, ReEdit offers high practical applicability: it requires no task-specific optimization and is four times faster than the next best baseline.