Instruction-based image editing has achieved remarkable progress; however, models trained solely via supervised fine-tuning often overfit to annotated patterns, hindering their ability to explore and generalize beyond the training distribution. To this end, we introduce Edit-R1, a novel post-training framework for instruction-based image editing based on policy optimization. Specifically, we utilize Diffusion Negative-aware Finetuning (DiffusionNFT), a likelihood-free policy optimization method consistent with the flow matching forward process, thereby enabling the use of higher-order samplers and more efficient training. A further key challenge is the absence of a universal reward model, owing to the diverse nature of editing instructions and tasks. To bridge this gap, we employ a Multimodal Large Language Model (MLLM) as a unified, training-free reward model, leveraging its output logits to provide fine-grained feedback. Furthermore, we carefully design a low-variance group filtering mechanism to reduce MLLM scoring noise and stabilize optimization. UniWorld-V2, trained with this framework, achieves \textbf{state-of-the-art} results on the ImgEdit and GEdit-Bench benchmarks, scoring 4.49 and 7.83, respectively. Crucially, our framework is model-agnostic, delivering substantial performance gains when applied to diverse base models such as Qwen-Image-Edit and FLUX-Kontext, demonstrating its wide applicability. Code and models are publicly available at https://github.com/PKU-YuanGroup/UniWorld-V2.
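To make the reward design concrete, the following is a minimal sketch of how an MLLM's output logits over candidate score tokens can be converted into a fine-grained scalar reward, and how a near-zero-variance sample group can be filtered out. All function names, the score-token interface, and the variance threshold are illustrative assumptions, not the paper's exact implementation.

```python
import math

def expected_score_from_logits(score_logits):
    """Turn an MLLM's logits over candidate score tokens into a soft reward.

    score_logits: dict mapping a discrete score (e.g. 1..5) to the logit the
    MLLM assigns to that score's token at the answer position (hypothetical
    interface). A softmax over these logits yields a distribution over scores,
    and its expectation gives a finer-grained reward than the argmax score.
    """
    m = max(score_logits.values())  # subtract max for numerical stability
    weights = {s: math.exp(l - m) for s, l in score_logits.items()}
    z = sum(weights.values())
    return sum(s * w for s, w in weights.items()) / z

def filter_low_variance_group(rewards, min_std=0.05):
    """Drop a group of sampled edits whose rewards are nearly identical.

    With near-zero reward variance, group-normalized advantages amplify MLLM
    scoring noise rather than a real quality signal, so the group is discarded
    (the threshold value here is an assumption).
    """
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return None if std < min_std else rewards
```

For example, if the MLLM assigns equal logits to the score tokens "1" and "5", the expected score is 3.0, whereas a hard argmax would have to commit to one extreme; this soft expectation is what provides the fine-grained feedback used during policy optimization.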