Despite significant progress in diffusion-based image generation, subject-driven generation and instruction-based editing remain challenging. Existing methods typically treat them as separate tasks and suffer from scarce high-quality data and poor generalization. Yet both tasks require capturing complex visual variations while maintaining consistency between inputs and outputs. We therefore propose MIGE, a unified framework that standardizes task representations using multimodal instructions. It treats subject-driven generation as creation on a blank canvas and instruction-based editing as modification of an existing image, establishing a shared input-output formulation. MIGE introduces a novel multimodal encoder that maps free-form multimodal instructions into a unified vision-language space, integrating visual and semantic features through a feature fusion mechanism. This unification enables joint training of the two tasks, which offers two key advantages: (1) Cross-task enhancement: by leveraging shared visual and semantic representations, joint training improves instruction adherence and visual consistency in both subject-driven generation and instruction-based editing. (2) Generalization: learning in a unified format facilitates cross-task knowledge transfer, enabling MIGE to generalize to novel compositional tasks, including instruction-based subject-driven editing. Experiments show that MIGE excels at both subject-driven generation and instruction-based editing while setting the state of the art on the new task of instruction-based subject-driven editing. Code and model are publicly available at https://github.com/Eureka-Maggie/MIGE.
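To make the fusion idea concrete, the following is a minimal PyTorch-style sketch of one way a multimodal encoder could project visual and text features into a shared space and fuse them; the class name, dimensions, and cross-attention design are illustrative assumptions for exposition, not MIGE's actual architecture.

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Illustrative multimodal encoder: projects image and text features
    into a shared space and fuses them with cross-attention.
    All names and dimensions here are hypothetical, not MIGE's design."""

    def __init__(self, img_dim=1024, txt_dim=768, joint_dim=768, n_heads=8):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)  # visual features -> joint space
        self.txt_proj = nn.Linear(txt_dim, joint_dim)  # text embeddings -> joint space
        # text tokens attend to visual tokens, grounding semantics in appearance
        self.cross_attn = nn.MultiheadAttention(joint_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(joint_dim)

    def forward(self, img_feats, txt_feats):
        # img_feats: (B, N_img, img_dim); txt_feats: (B, N_txt, txt_dim)
        v = self.img_proj(img_feats)
        t = self.txt_proj(txt_feats)
        fused, _ = self.cross_attn(query=t, key=v, value=v)
        return self.norm(t + fused)  # residual fusion of semantic and visual features

# toy usage: 257 image patch tokens, 77 text tokens
enc = FusionEncoder()
cond = enc(torch.randn(2, 257, 1024), torch.randn(2, 77, 768))
print(cond.shape)  # torch.Size([2, 77, 768])
```

Under this sketch, the fused token sequence would serve as the unified conditioning signal for both tasks, which is what lets a blank-canvas generation request and an image-editing request share one input-output format.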