Recent research explores the potential of Diffusion Models (DMs) for consistent object editing, which aims to modify an object's position, size, and composition while preserving the consistency of the object and background without altering their texture and attributes. Current inference-time methods often rely on DDIM inversion, which inherently compromises efficiency and the achievable consistency of edited images. Recent methods also utilize energy guidance, which iteratively updates the predicted noise and can drive the latents away from the original image, resulting in distortions. In this paper, we propose PixelMan, an inversion-free and training-free method for achieving consistent object editing via Pixel Manipulation and generation. We directly create a duplicate copy of the source object at the target location in pixel space, and introduce an efficient sampling approach that iteratively harmonizes the duplicated object into the target location and inpaints its original location. Image consistency is ensured by anchoring the edited image to be generated to the pixel-manipulated image, as well as by introducing various consistency-preserving optimization techniques during inference. Experimental evaluations on benchmark datasets, together with extensive visual comparisons, show that in as few as 16 inference steps, PixelMan outperforms a range of state-of-the-art training-based and training-free methods (which usually require 50 steps) on multiple consistent object editing tasks.
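The pixel-space duplication step described above can be illustrated with a minimal sketch: copy the object selected by a binary mask to a shifted target location, producing a "pixel-manipulated" image that later anchors generation. The function name, mask convention, and offset parameters here are illustrative assumptions, not the paper's exact interface; harmonization and inpainting are handled separately by the sampling procedure.

```python
import numpy as np

def duplicate_object(image: np.ndarray, mask: np.ndarray,
                     dy: int, dx: int) -> np.ndarray:
    """Paste a copy of the object selected by `mask` at an offset of
    (dy, dx) pixels, leaving the original object in place."""
    # Shift both the mask and the image content to the target location.
    shifted_mask = np.roll(mask, shift=(dy, dx), axis=(0, 1))
    shifted_image = np.roll(image, shift=(dy, dx), axis=(0, 1))
    out = image.copy()
    # Overwrite only the target region with the duplicated object pixels.
    out[shifted_mask] = shifted_image[shifted_mask]
    return out

# Toy example: duplicate a 2x2 bright "object" down-right by 1 pixel
# in a 4x4 grayscale image.
img = np.zeros((4, 4), dtype=np.uint8)
img[0:2, 0:2] = 255
obj_mask = np.zeros((4, 4), dtype=bool)
obj_mask[0:2, 0:2] = True
edited = duplicate_object(img, obj_mask, dy=1, dx=1)
```

Because the copy is made directly in pixel space before any denoising, the later sampling steps only need to blend this rough paste into its surroundings and fill the (here still visible) source region, rather than reconstruct the object from an inverted latent.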