Recent advances in diffusion models have brought remarkable visual fidelity to instruction-guided image editing. However, their global denoising process inherently entangles the edited region with the entire image context, leading to unintended spurious modifications and compromised adherence to editing instructions. In contrast, autoregressive models offer a distinct paradigm by formulating image synthesis as a sequential process over discrete visual tokens. Their causal and compositional mechanism naturally circumvents the adherence challenges of diffusion-based methods. In this paper, we present VAREdit, a visual autoregressive (VAR) framework that reframes image editing as a next-scale prediction problem. Conditioned on source image features and text instructions, VAREdit generates multi-scale target features to achieve precise edits. A core challenge in this paradigm is how to effectively condition on the source image tokens. We observe that finest-scale source features cannot effectively guide the prediction of coarser target features. To bridge this gap, we introduce a Scale-Aligned Reference (SAR) module, which injects scale-matched conditioning information into the first self-attention layer. VAREdit demonstrates significant advancements in both editing adherence and efficiency. On the EMU-Edit and PIE-Bench benchmarks, VAREdit outperforms leading diffusion-based methods by a substantial margin in terms of both CLIP and GPT scores. Moreover, VAREdit completes a 512$\times$512 edit in 1.2 seconds, making it 2.2$\times$ faster than the similarly sized UltraEdit. Code is available at: https://github.com/HiDream-ai/VAREdit.
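To make the next-scale prediction paradigm and scale-aligned conditioning concrete, the following is a minimal toy sketch, not the paper's actual model: all names, shapes, scale schedule, and the additive "fusion" stand-in for self-attention are illustrative assumptions. The key idea shown is that at each target scale, the source features are resized to that same scale before conditioning, rather than always using the finest-scale source features.

```python
import numpy as np

# Hypothetical scale schedule (token-map side lengths, coarse to fine)
# and feature dimension; the real model's values may differ.
SCALES = [1, 2, 4, 8]
DIM = 16

def resize(feat, side):
    """Nearest-neighbor resize of an (h, w, DIM) feature map to (side, side, DIM)."""
    h = feat.shape[0]
    idx = np.arange(side) * h // side
    return feat[idx][:, idx]

def predict_scale(context, side):
    """Stand-in for the VAR transformer step: emits a (side, side, DIM) map."""
    mean = context.mean(axis=(0, 1), keepdims=True)
    return np.broadcast_to(mean, (side, side, DIM)).copy()

def generate(source_finest):
    """Autoregressively predict target feature maps, coarse to fine."""
    preds = []
    for side in SCALES:
        # Scale-aligned conditioning: resize the source features to the
        # CURRENT target scale instead of injecting finest-scale features.
        sar = resize(source_finest, side)
        # Condition on the previously predicted (coarser) scale, upsampled.
        prev = resize(preds[-1], side) if preds else np.zeros_like(sar)
        context = sar + prev  # toy additive fusion in place of attention
        preds.append(predict_scale(context, side))
    return preds

source = np.random.default_rng(1).standard_normal((8, 8, DIM))
maps = generate(source)
```

Each predicted map has the side length of its scale; the finest map (8$\times$8 here) would be decoded to the edited image in a real system.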